
Overview of Parallel Algorithms


Presentation Transcript


  1. Overview of Parallel Algorithms

  2. Contents • 1. Parallel computing models • 2. Basic design techniques for parallel algorithms

  3. Von Neumann Model

  4. Instruction Processing • Fetch instruction from memory • Decode instruction • Evaluate address • Fetch operands from memory • Execute operation • Store result
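
To make the cycle concrete, here is a minimal fetch-decode-execute loop in Python. It is not from the slides: the toy ISA (LOAD/ADD/STORE/HALT) and the memory layout are invented purely for illustration.

```python
# Minimal von Neumann fetch-decode-execute loop (illustrative sketch).
# The toy ISA (LOAD/ADD/STORE/HALT) and memory layout are hypothetical.

memory = [
    ("LOAD", 100),   # acc <- memory[100]
    ("ADD", 101),    # acc <- acc + memory[101]
    ("STORE", 102),  # memory[102] <- acc
    ("HALT", None),
] + [None] * 96 + [3, 4, 0]  # data words at addresses 100..102

pc, acc = 0, 0
while True:
    opcode, addr = memory[pc]      # fetch instruction, evaluate address
    pc += 1
    if opcode == "LOAD":           # fetch operand from memory
        acc = memory[addr]
    elif opcode == "ADD":          # execute operation
        acc += memory[addr]
    elif opcode == "STORE":        # store result
        memory[addr] = acc
    elif opcode == "HALT":
        break

print(memory[102])  # prints 7
```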

  5. Parallel Computing Model • Computing model • Bridge between SW and HW • General-purpose, scalable HW • Transportable SW • Abstract architecture for algorithm development • e.g., PRAM, BSP, LogP

  6. Parallel Programming Model • What does the programmer use when coding applications? • Specifies communication and synchronization • The communication primitives exposed at user level realize the programming model • e.g., uniprocessor, multiprogramming, data parallel, message passing, shared address space

  7. Aspects of Parallel Processing [Figure: multiprocessors, i.e., processors P and memories joined by an interconnection network, support a stack of roles: (1) architecture designer, (2) system programmer and middleware, (3) application developer using a parallel programming model, (4) algorithm developer using a parallel computing model.]

  8. Parallel Computing Models • PRAM • Parallel Random Access Machine • A set of p processors • Global shared memory • Each processor can access any memory location in one time step • Globally synchronized • Executing the same program in lockstep

  9. Illustration of PRAM [Figure: processors P1, P2, P3, …, Pp driven by a common clock (CLK) and all connected to a single shared memory.] • A single program is executed in MIMD mode • Each processor has a unique index • p processors connected to one shared memory

  10. Features • Model architecture • Synchronized RAMs with a common clock, but MIMD rather than SIMD operation • No local memory in each RAM • One global shared memory • Single-address-space architecture • Synchronization, communication, and parallelism overheads are zero

  11. Features (Cont’d) • Operations per step • Read/write a word from/to the memory • Local operation • An instruction could perform the following three operations in one cycle • Fetch one or two words from the memory as operands • Perform an arithmetic/logic operation • Store the result back in memory
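
As an illustration of these step semantics (not from the slides), the following Python sketch simulates an EREW-PRAM summation in lockstep rounds: in each synchronized step, every active processor fetches two operands from shared memory, adds them, and stores the result back. All names are invented.

```python
# Lockstep simulation of an EREW-PRAM parallel sum (illustrative sketch).
# Each "step" models one synchronized PRAM cycle: every active processor
# fetches two operands from shared memory, adds them, and stores the result.

def pram_sum(shared_mem):
    n = len(shared_mem)
    stride = 1
    while stride < n:
        # Processors i = 0, 2*stride, 4*stride, ... act in this step.
        # In a real PRAM they run concurrently; we emulate the step serially.
        writes = {}
        for i in range(0, n - stride, 2 * stride):
            writes[i] = shared_mem[i] + shared_mem[i + stride]  # read, read, add
        for i, value in writes.items():
            shared_mem[i] = value                               # write back
        stride *= 2
    return shared_mem[0]

print(pram_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

For n values this finishes in ⌈log₂ n⌉ synchronized steps, the usual PRAM reduction bound.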

  12. Problems with PRAM • Inaccurate description of real-world parallel systems • Unaccounted costs • Latency, bandwidth, non-local memory access, memory contention, synchronization costs, etc. • Algorithms perceived to work well in PRAM may have poor performance in practice

  13. PRAM Variants • Variants arise to model some of these costs • Each introduces some practical aspect of machine • Gives algorithm designer better idea for optimization • Variants can be grouped into 4 categories • Memory access • Synchronization • Latency • Bandwidth

  14. Memory Access • Impractical to have concurrent reads and writes to the same memory location • Contention issues • CRCW PRAM • CPRAM-CRCW (Common PRAM-CRCW): concurrent writes are allowed only when all processors write the same value • PPRAM-CRCW (Priority PRAM-CRCW): only the highest-priority processor is allowed to write • APRAM-CRCW (Arbitrary PRAM-CRCW): any arbitrary processor is allowed to write • EREW or CREW PRAM • QRQW (queue-read, queue-write) • Expensive • The multiple ports required for concurrent access may be prohibitively expensive
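
A hedged sketch of how the three CRCW write-resolution policies could be modeled in Python; the function name `resolve_crcw` and the policy strings are invented for illustration.

```python
# Resolving concurrent writes to one cell under CRCW policies (illustrative).
# `writes` maps processor index -> value it attempts to write this step.

def resolve_crcw(writes, policy):
    values = list(writes.values())
    if policy == "common":
        # Common: the write succeeds only if all processors agree.
        if len(set(values)) != 1:
            raise ValueError("Common CRCW requires identical values")
        return values[0]
    if policy == "priority":
        # Priority: lowest processor index (highest priority) wins.
        return writes[min(writes)]
    if policy == "arbitrary":
        # Arbitrary: any one contender may win; we pick one, unspecified.
        return next(iter(writes.values()))
    raise ValueError(f"unknown policy: {policy}")

print(resolve_crcw({3: 7, 1: 9}, "priority"))  # 9 (processor 1 wins)
```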

  15. Synchronization • The standard PRAM is globally synchronized • The standard PRAM model does not charge a cost for synchronization • Unrealistic! Synchronization is necessary and expensive in practical parallel systems • Variants model the cost of synchronization • APRAM (Asynchronous PRAM): each processor has its own local memory, local clock, and local program; there is no global clock, and processors execute asynchronously; processors communicate through the shared memory; dependencies between processors must be enforced by explicit synchronization barriers in the parallel program • XPRAM (bulk-synchronous PRAM, also known as the BSP model) • Provides an incentive for algorithm designers to synchronize only when necessary

  16. Synchronization (cont.) • BPRAM (Block-Parallel RAM) • Assumes n nodes, each containing a processor and a memory module, interconnected by a communication medium • A computation is a sequence of phases, called supersteps: in one superstep, each processor can execute operations on data residing in its local memory, send messages, and execute a global synchronization instruction • Charges L units to access the first message and b units for each subsequent contiguous block

  17. Latency • Standard PRAM assumes unit cost for non-local memory access • In practice, non-local memory access has a severe effect on performance • PRAM variant • LPRAM (Local-memory PRAM) • A set of nodes, each with a processor and a local memory; the nodes can communicate through a globally shared memory • Two types of steps are defined and separately accounted for: computation steps, where each processor performs one operation on local data, and communication steps, where each processor can write, and then read, a word from global memory • Charges a cost of L units per global memory access

  18. Bandwidth • Standard PRAM assumes unlimited bandwidth • In practice, bandwidth is limited • PRAM variants • DRAM (Distributed Random Access Machine) • Two-level memory hierarchy • Access to global memory is charged a cost based on possible data congestion • PRAM(m) • Global memory is segmented into modules • In any given step, only m memory accesses can be serviced

  19. Other Distributed Models • Distributed Memory Model • No global memory • Each processor associated with some local memory • Postal Model • Processor sends request for non-local memory • Instead of stalling, it continues working while data is en-route

  20. Network Models • Focus on impact of topology of communications network • Early focus of parallel computation • Distributed Memory Model? • Cost of remote memory access is a function of both topology and the access pattern • Provides incentives for efficient • Data mappings • Communications routing

  21. LogP • Model design strongly influenced by trends in parallel computer design • Model of a distributed-memory multiprocessor • Processors communicate via point-to-point messages • Attempts to capture the important bottlenecks of parallel machines

  22. LogP • Specifies performance characteristics of the communication network • Provides an incentive for clever data placement • Illustrates the importance of balanced communication

  23. Parallel Machine Trends • Machine organization for most parallel machines is similar • A collection of complete computers • Microprocessor • Cache memory • Sizable DRAM memory • Connected by robust communications network • No single programming methodology is dominant

  24. Other considerations • Processor count • Number of nodes, roughly the price of the most expensive supercomputer divided by the cost of a node • Communication bandwidth lags far behind processor-memory bandwidth • Presence of adaptive routing and fault-recovering networks • Affects algorithm design • Parallel algorithms are developed assuming a large number of data elements per processor • Attempts to exploit network topology or exact processor count are not very robust

  25. Model Parameters • Latency (L) • Delay incurred in communicating a message from source to destination • Hop count and hop delay • Communication overhead (o) • Length of time a processor is engaged in sending or receiving a message • Node overhead for processing a send or receive • Communication bandwidth (g) • Minimum time interval (gap) between consecutive messages • Processor count (P) • Number of processors

  26. LogP Model [Figure: timeline of one message over time t; the sender incurs overhead o, the message travels with latency L, the receiver incurs overhead o, and consecutive sends are separated by the gap g.]
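
Reading the diagram as arithmetic: a single small message takes o + L + o end to end, and a stream of messages from one sender is rate-limited by the gap g. A minimal sketch with invented parameter values:

```python
# LogP cost estimates (illustrative sketch; parameter values are made up).
L, o, g = 6.0, 2.0, 4.0  # latency, per-message overhead, gap (in cycles)

def one_message_time():
    # sender overhead + network latency + receiver overhead
    return o + L + o

def n_message_stream_time(n):
    # A sender can inject a new message only every g cycles (assuming g >= o),
    # so the last of n messages leaves at (n - 1) * g and then takes o + L + o.
    return (n - 1) * g + one_message_time()

print(one_message_time())        # 10.0
print(n_message_stream_time(5))  # 26.0
```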

  27. Bulk Synchronous Parallel • Bulk Synchronous Parallel(BSP) • P processors with local memory • Router • Facilities for periodic global synchronization • Every l steps • Models • Bandwidth limitations • Latency • Synchronization costs • Does not model • Communication overhead • Processor topology

  28. BSP Computer • Distributed memory architecture • 3 components • Node • Processor • Local memory • Router (Communication Network) • Point-to-point, message passing (or shared variable) • Barrier synchronizing facility • All or subset

  29. Illustration of BSP [Figure: nodes, each a processor P with local memory M and per-superstep computation bound w, connected by a communication network with bandwidth parameter g and synchronized by a barrier with cost l.]

  30. Three Parameters • w parameter • Maximum computation time within each superstep • A computation operation takes at most w cycles • g parameter • Number of cycles for communicating a unit message when all processors are involved in communication (network bandwidth) • h: the maximum number of incoming or outgoing messages per processor in a superstep • A communication operation takes g·h cycles • l parameter • Barrier synchronization takes l cycles
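
Putting the three parameters together, the cost of one superstep is commonly written as w + g·h + l, and a program's cost is the sum over its supersteps. A minimal sketch with invented numbers:

```python
# BSP cost model (illustrative sketch; all numbers are made up).

def superstep_cost(w, h, g, l):
    # local computation + h-relation communication + barrier synchronization
    return w + g * h + l

def program_cost(supersteps, g, l):
    # supersteps: list of (w, h) pairs, one per superstep
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Two supersteps: (w=100, h=10) and (w=50, h=4), with g=4 and l=30.
print(program_cost([(100, 10), (50, 4)], g=4, l=30))  # 266
```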

  31. BSP Program • A BSP computation consists of S supersteps • A superstep is a sequence of steps followed by a barrier synchronization • Superstep • Any remote memory accesses take effect at the barrier (loosely synchronous)

  32. BSP Program [Figure: processors P1..P4 execute superstep 1 (computation, then communication, then a barrier) and continue into superstep 2.]

  33. Model Survey Summary • No single model is acceptable! • Between models, subset of characteristics are focused in majority of models • Computational Parallelism • Communication Latency • Communication Overhead • Communication Bandwidth • Execution Synchronization • Memory Hierarchy • Network Topology

  34. Computational Parallelism • Number of physical processors • Static versus dynamic parallelism • Should number of processors be fixed? • Fault-recovery networks allow for node failure • Many parallel systems allow incremental upgrades by increasing node count

  35. Latency • Fixed message length or variable message length? • Network topology? • Communication Overhead? • Contention based latency? • Memory hierarchy?

  36. Bandwidth • Limited resource • With low latency • Tendency for bandwidth abuse by flooding network

  37. Synchronization • The ability to solve a wide class of problems requires asynchronous parallelism • Synchronization achieved via message passing • Synchronization as a communication cost

  38. Unified Model? • Difficult • Parallel machines are complicated • Still evolving • Different users from diverse disciplines • Requires a common set of characteristics derived from the needs of different users • Again, a balance is needed between descriptiveness and prescriptiveness

  39. Algorithms and Concurrency • Introduction to Parallel Algorithms • Tasks and Decomposition • Processes and Mapping • Decomposition Techniques • Recursive Decomposition • Data Decomposition • Exploratory Decomposition • Hybrid Decomposition • Characteristics of Tasks and Interactions • Task Generation, Granularity, and Context • Characteristics of Task Interactions.

  40. Concurrency and Mapping • Mapping Techniques for Load Balancing • Static and Dynamic Mapping • Methods for Minimizing Interaction Overheads • Maximizing Data Locality • Minimizing Contention and Hot-Spots • Overlapping Communication and Computations • Replication vs. Communication • Group Communications vs. Point-to-Point Communication • Parallel Algorithm Design Models • Data-Parallel, Work-Pool, Task Graph, Master-Slave, Pipeline, and Hybrid Models

  41. Preliminaries: Decomposition, Tasks, and Dependency Graphs • The first step in developing a parallel algorithm is to decompose the problem into tasks that can be executed concurrently • A given problem may be decomposed into tasks in many different ways • Tasks may be of the same, different, or even indeterminate sizes • A decomposition can be illustrated in the form of a directed graph with nodes corresponding to tasks and edges indicating that the result of one task is required for processing the next. Such a graph is called a task dependency graph

  42. Example: Multiplying a Dense Matrix with a Vector Computation of each element of the output vector y is independent of the other elements. Based on this, a dense matrix-vector product can be decomposed into n tasks. The figure highlights the portion of the matrix and vector accessed by Task 1. Observations: While tasks share data (namely, the vector b), they do not have any control dependencies, i.e., no task needs to wait for the (partial) completion of any other. All tasks are of the same size in terms of number of operations. Is this the maximum number of tasks we could decompose this problem into?
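
As a hedged sketch of this decomposition (the slides give no code; the names are invented), each of the n tasks computes one element of y = Ab, and the tasks share only read access to b:

```python
# One-task-per-row decomposition of dense matrix-vector product y = A * b
# (illustrative sketch). Tasks share read-only data (b) but have no control
# dependencies, so they can all run concurrently.
from concurrent.futures import ThreadPoolExecutor

def task(i, A, b):
    # Task i computes the single output element y[i].
    return sum(A[i][j] * b[j] for j in range(len(b)))

def matvec(A, b):
    with ThreadPoolExecutor() as pool:
        # n independent tasks, one per row of A
        return list(pool.map(lambda i: task(i, A, b), range(len(A))))

A = [[1, 2], [3, 4], [5, 6]]
b = [1, 1]
print(matvec(A, b))  # [3, 7, 11]
```

Python threads here only illustrate the task structure; because of the global interpreter lock, real speedup would require processes or a different runtime.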

  43. Example: Database Query Processing Consider the execution of the query: MODEL = ``CIVIC'' AND YEAR = 2001 AND (COLOR = ``GREEN'' OR COLOR = ``WHITE'') on the following database:

  44. Example: Database Query Processing The execution of the query can be divided into subtasks in various ways. Each task can be thought of as generating an intermediate table of entries that satisfy a particular clause. Decomposing the given query into a number of tasks. Edges in this graph denote that the output of one task is needed to accomplish the next.

  45. Example: Database Query Processing Note that the same problem can be decomposed into subtasks in other ways as well. An alternate decomposition of the given problem into subtasks, along with their data dependencies. Different task decompositions may lead to significant differences with respect to their eventual parallel performance.

  46. Granularity of Task Decompositions • The number of tasks into which a problem is decomposed determines its granularity. • Decomposition into a large number of tasks results in a fine-grained decomposition; decomposition into a small number of tasks results in a coarse-grained decomposition. A coarse-grained counterpart to the dense matrix-vector product example: each task now corresponds to the computation of three elements of the result vector, as sketched below.
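
The coarser decomposition can be expressed by grouping rows into blocks, as in this brief sketch (the block size of 3 matches the example above; the helper name `chunked_tasks` is invented):

```python
# Coarse-grained decomposition: one task per block of 3 rows (illustrative).
def chunked_tasks(n, chunk=3):
    # Yield the row range each task is responsible for: [0,3), [3,6), ...
    for start in range(0, n, chunk):
        yield range(start, min(start + chunk, n))

print([list(r) for r in chunked_tasks(7)])  # [[0, 1, 2], [3, 4, 5], [6]]
```

With n rows this yields ⌈n/3⌉ tasks instead of n, reducing the degree of concurrency accordingly.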

  47. Degree of Concurrency • The number of tasks that can be executed in parallel is the degree of concurrency of a decomposition. • Since the number of tasks that can be executed in parallel may change over program execution, the maximum degree of concurrency is the maximum number of such tasks at any point during execution. • The average degree of concurrency is the average number of tasks that can be processed in parallel over the execution of the program. • The degree of concurrency increases as the decomposition becomes finer in granularity and vice versa.

  48. Critical Path Length • A directed path in the task dependency graph represents a sequence of tasks that must be processed one after the other. • The longest such path determines the shortest time in which the program can be executed in parallel. • The length of the longest path in a task dependency graph is called the critical path length.
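
The critical path length can be computed by a longest-path traversal of the task dependency graph. A hedged Python sketch: the example DAG and the task costs are invented, not taken from the slides.

```python
# Critical path length of a task dependency graph (illustrative sketch).
# deps[t] lists the tasks whose results task t needs; cost[t] is t's work.
from functools import lru_cache

deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
cost = {"a": 10, "b": 5, "c": 2, "d": 8}

@lru_cache(maxsize=None)
def finish(t):
    # Earliest finish time of t: its own cost after all prerequisites finish.
    return cost[t] + max((finish(d) for d in deps[t]), default=0)

print(max(finish(t) for t in deps))  # 20 = a(10) -> c(2) -> d(8)
```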

  49. Critical Path Length Consider the task dependency graphs of the two database query decompositions: What are the critical path lengths for the two task dependency graphs? How many processors are needed in each case to achieve this minimum parallel execution time? What is the maximum degree of concurrency?

  50. Task Interaction Graphs • Subtasks generally exchange data with others in a decomposition. • For example, even in the trivial decomposition of the dense matrix-vector product, if the vector is not replicated across all tasks, they will have to communicate elements of the vector. • The graph of tasks (nodes) and their interactions/data exchange (edges) is referred to as a task interaction graph. • Note that task interaction graphs represent data dependencies, whereas task dependency graphs represent control dependencies.
