
Introduction to Parallel Programming Concepts




Presentation Transcript


  1. Introduction to Parallel Programming Concepts beatnic@rcc.its.psu.edu Research Computing and Cyberinfrastructure

  2. Index • Introductions • Serial vs. Parallel • Parallelization concepts • Flynn’s classical taxonomy • Programming Models and Architectures • Designing Parallel Programs • Speedup and its relatives • Parallel examples

  3. Serial computing • Traditionally, software has been written for serial computation: • To be run on a single computer having a single Central Processing Unit (CPU); • A problem is broken into a discrete series of instructions. • Instructions are executed one after another. • Only one instruction may execute at any moment in time.

  4. What is parallel computing? • In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem • Multiple CPUs • Discretization of the problem for concurrency • Simultaneous execution on different CPUs

  5. Uses for Parallel computing • Historically, parallel computing has been considered to be "the high end of computing", and has been used to model difficult scientific and engineering problems found in the real world. • Today, commercial applications provide an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. For example: • Databases, data mining • Oil exploration • Web search engines, web based business services • Medical imaging and diagnosis • Management of national and multi-national corporations • Financial and economic modeling • Advanced graphics and virtual reality, particularly in the entertainment industry • Networked video and multi-media technologies • Collaborative work environments

  6. Why use parallel computing • Save time and money: applying more resources to a task will shorten its time to completion, with potential cost savings • Parallel clusters can be built from cheap, commodity components • Solve larger problems: problems so large and/or complex that they are impractical or impossible to solve on a single computer, especially given limited computer memory • Web search • Provide concurrency: • Access Grid provides a global collaboration network where people from around the world can meet and conduct work "virtually" • Use of non-local resources: • SETI@home: about 300,000 computers delivering 528 TFlops • folding@home: around 340,000 computers delivering 4.2 PFlops • Limits to serial computing: • Transmission speeds • Limits to miniaturization • Economic limitations • Current computer architectures are increasingly relying upon hardware-level parallelism to improve performance: • Multiple execution units • Pipelined instructions • Multi-core

  7. Parallelization Concepts • When performing a task, some subtasks depend on one another, while others do not • Example: Preparing dinner • Salad prep is independent of lasagna baking • Lasagna must be assembled before baking • Likewise, in solving scientific problems, some tasks are independent of one another

  8. Serial vs. Parallel: Example Example: Preparing dinner • Serial tasks: making sauce, assembling lasagna, baking lasagna; washing lettuce, cutting vegetables, assembling salad • Parallel tasks: making lasagna, making salad, setting table

  9. Serial vs. Parallel: Example • Could have several chefs, each performing one parallel task • This is the concept behind parallel computing

  10. Discussion: Jigsaw Puzzle • Suppose we want to do a 5000-piece jigsaw puzzle • Time for one person to complete the puzzle: n hours • How can we decrease the wall time to completion?

  11. Discussion: Jigsaw Puzzle • Add another person at the table • Effect on wall time • Communication • Resource contention • Add p people at the table • Effect on wall time • Communication • Resource contention

  12. Parallel computing in a nutshell • Compute resources: • A single computer with multiple processors; • A number of computers connected by a network; • A combination of both. • A problem is solved in parallel only when it can be: • Broken apart into discrete pieces of work that can be solved simultaneously; • Executed as multiple program instructions at any moment in time; • Solved in less time with multiple compute resources than with a single compute resource

  13. Flynn’s Classical Taxonomy
  • SISD (Single Instruction, Single Data): Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle. Single data: only one data stream is being used as input during any one clock cycle. Examples: older generation mainframes, minicomputers and workstations; most modern day single-processor PCs.
  • SIMD (Single Instruction, Multiple Data): All processing units execute the same instruction at any given clock cycle, and each processing unit can operate on a different data element. Best suited for specialized problems characterized by a high degree of regularity, such as graphics/image processing.
  • MISD (Multiple Instruction, Single Data): Each processing unit operates on the same data stream independently via independent instruction streams. Few actual examples of this class of parallel computer have ever existed.
  • MIMD (Multiple Instruction, Multiple Data): Every processor may be executing a different instruction stream, and every processor may be working with a different data stream.

  14. What next? • Parallel Computer Memory Architecture • Shared Memory • Distributed Memory • Hybrid Distributed-Shared Memory • Parallel Programming models • Threads • Message Passing • Data Parallel • Hybrid

  15. Shared Memory • Shared global address space • Multiple processors can operate independently but share the same memory resources • Two classes : • Uniform Memory Access : Equal access and access time to memory • Non-Uniform Memory Access : Not all processors have equal access to all memories

  16. Shared Memory • Advantages: • Global address space provides a user-friendly programming perspective to memory • Data sharing between tasks is both fast and uniform due to the proximity of memory to CPUs • Disadvantages: • Lack of scalability between memory and CPUs: adding more CPUs can geometrically increase traffic on the shared memory-CPU path • The programmer is responsible for synchronization constructs that ensure "correct" access of global memory.

  17. Distributed Memory • No concept of global address space across all processors • Each processor has its own local memory; changes it makes to its local memory have no effect on the memory of other processors • It is the task of the programmer to explicitly define how and when data is communicated and synchronized • The network "fabric" used for data transfer varies widely, though it can be as simple as Ethernet.

  18. Distributed Memory • Advantages: • Memory scales with the number of processors: increase the number of processors and the size of memory increases proportionately • Rapid access to each processor's own memory • Cost effectiveness: can use commodity, off-the-shelf processors and networking • Disadvantages: • More complex programming compared to SMP • It may be difficult to map existing data structures, based on global memory, to this memory organization.

  19. Hybrid Memory • The shared memory component is usually a cache coherent SMP machine. • The distributed memory component is the networking of multiple SMPs.

  20. Parallel Programming Models • Abstraction above hardware and memory architectures • Overall classification: • Task Parallelism • Data Parallelism • Classification based on memory models • Threads • Message Passing • Data Parallel

  21. Threads Model • A single process has multiple, concurrent execution paths • Threads are scheduled and run by the operating system concurrently • Threads communicate with each other through global memory • Commonly associated with shared memory architectures • OpenMP • Compiler directives
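As a minimal sketch of the threads model (assuming C and an OpenMP-capable compiler, e.g. compiled with -fopenmp; the array and loop are illustrative), a single compiler directive splits the loop across threads, which all read and write the shared array:

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      double a[1000];

      /* Each thread is handed a chunk of iterations; all threads share
         the array 'a' through global memory. */
      #pragma omp parallel for
      for (int i = 0; i < 1000; i++) {
          a[i] = 2.0 * i;
      }

      printf("last element = %f (up to %d threads)\n", a[999], omp_get_max_threads());
      return 0;
  }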

  22. Message Passing Model • Tasks use their own local memory during computation • Tasks exchange data by sending and receiving messages • Data transfer usually requires cooperative operations to be performed by each process: • a send operation must have a matching receive operation.
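A minimal sketch of message passing in C with MPI (assuming an MPI installation; compile with mpicc and run with at least two processes): rank 0's send is matched by rank 1's receive, and each rank only ever touches its own local copy of the variable.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank, value = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          value = 42;   /* lives in rank 0's local memory */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("rank 1 received %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }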

  23. Data Parallel Model • Most of the parallel work focuses on performing operations on a data set. • Tasks collectively work on the same data structure. • Each task works on a different partition of the same data structure • Tasks perform the same operation on their partition of work

  24. Review • What is parallel computing? • Jigsaw puzzle example • Flynn’s classical taxonomy • Parallel computer architectures • Shared memory • Distributed memory • Hybrid • Parallel programming models • Threads • Message passing • Data parallel

  25. Designing Parallel Programs

  26. Understanding the problem • First determine whether the problem can be parallelized • Example of a non-parallelizable problem: calculation of the Fibonacci series (1,1,2,3,5,8,13,21,...) by use of the formula F(k + 2) = F(k + 1) + F(k) • This entails dependent calculations rather than independent ones: the calculation of the k + 2 value uses those of both k + 1 and k • Example of a parallelizable problem: calculate the potential energy for each of several thousand independent conformations of a molecule; when done, find the minimum energy conformation • This can be solved in parallel: each of the molecular conformations is independently determinable, and the calculation of the minimum energy conformation is also a parallelizable problem
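A sketch of the difference in C (the energy function and the number of conformations below are stand-ins, not the real calculation): the Fibonacci loop carries a dependence from one iteration to the next, while the conformation loop's iterations are independent and only the final minimum requires combining results.

  #include <stdio.h>

  #define N_CONF 1000   /* illustrative number of conformations */

  /* Stand-in for a real energy evaluation; each call depends only on its input. */
  static double potential_energy(int c) { return (c % 97) * 0.5; }

  int main(void) {
      /* Dependent: iteration k needs k-1 and k-2, so the loop cannot be split up. */
      long long f[50] = {1, 1};
      for (int k = 2; k < 50; k++)
          f[k] = f[k - 1] + f[k - 2];

      /* Independent: each iteration uses only its own input, so the loop can be
         distributed across tasks; the minimum is a final reduction step. */
      double min_e = potential_energy(0);
      for (int c = 1; c < N_CONF; c++) {
          double e = potential_energy(c);
          if (e < min_e) min_e = e;
      }

      printf("F(50) = %lld, minimum energy = %f\n", f[49], min_e);
      return 0;
  }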

  27. Hotspots and Bottlenecks • Identify the program’s hotspots • The majority of scientific and technical programs usually accomplish most of their work in a few places. • Profilers and performance analysis tools • Focus on parallelizing the hotspots and ignore those sections of the program that account for little CPU usage. • Identify the bottlenecks • Are there areas that are disproportionately slow, or cause parallelizable work to halt or be deferred? For example, I/O is usually something that slows a program down. • May be possible to restructure the program or use a different algorithm to reduce or eliminate unnecessary slow areas • Identify inhibitors to parallelism • Data dependence • Investigate other algorithms if possible.

  28. Partitioning • Break the problem into discrete "chunks" of work that can be distributed to multiple tasks • Domain decomposition • Function decomposition

  29. Domain Decomposition • The data associated with the problem is decomposed and distributed among tasks • There are different ways to partition the data

  30. Functional Decomposition • Focus is on the computation • Functional decomposition lends itself well to problems that can be split into different tasks

  31. Examples for Functional Decomposition • Signal Processing • Ecosystem modeling • Climate Modeling

  32. Enough of classifications… • OK, enough. Too many classifications... it's confusing. Tell me what to consider before writing parallel code. • Communication • Synchronization • Load balance • Granularity • and finally, speedup

  33. Communications • The need for communication depends on your problem • No need for communication: • Some types of problems can be decomposed and executed in parallel with virtually no need for tasks to share data • embarrassingly parallel problems, e.g. image processing • Need for communication: • Most parallel problems require tasks to share data with each other • e.g. the heat diffusion equation

  34. Inter-task Communications • Cost of communications • Inter-task communication virtually always implies overhead. • Machine cycles and resources that could be used for computation are instead used to package and transmit data. • Communications frequently require some type of synchronization between tasks, which can result in tasks spending time "waiting" instead of doing work. • Competing communication traffic can saturate the available network bandwidth, further aggravating performance problems

  35. Inter-task Communications • Latency vs. Bandwidth • Latency is the time it takes to send a minimal message from point A to point B. • microseconds • Bandwidth is the amount of data that can be communicated per unit of time. • megabytes/sec or gigabytes/sec • Sending many small messages can cause latency to dominate communication overheads • Package small messages into a larger message • increase the effective communications bandwidth
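As a rough cost model (the numbers below are illustrative, not measured), the time to send one message of $n$ bytes over a link with latency $L$ and bandwidth $B$ is

$$T(n) = L + \frac{n}{B},$$

so $m$ separate messages cost $m\,(L + n/B)$, while one packed message of size $mn$ costs $L + mn/B$. For example, with $L = 10\,\mu\text{s}$ and $B = 1\,\text{GB/s}$, 1000 messages of 1 kB each cost about $1000 \times 11\,\mu\text{s} = 11\,\text{ms}$, whereas a single 1 MB message costs about $1\,\text{ms}$.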

  36. Inter-task communications • Synchronous communication • "handshaking" between tasks that are sharing data • explicitly structured in code by the programmer • blocking communications • Asynchronous communication • allow tasks to transfer data independently from one another • non-blocking communications • Interleaving computation with communication
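A sketch of interleaving computation with communication in C using non-blocking MPI calls (the buffer names and sizes are illustrative): the receive is started, independent work proceeds, and the incoming data is only used after MPI_Wait confirms the transfer completed.

  #include <mpi.h>

  void exchange_and_compute(double *halo, double *interior, int n, int neighbor) {
      MPI_Request req;

      /* Start the transfer without blocking... */
      MPI_Irecv(halo, n, MPI_DOUBLE, neighbor, 0, MPI_COMM_WORLD, &req);

      /* ...do work that does not need the incoming data... */
      for (int i = 0; i < n; i++)
          interior[i] *= 2.0;

      /* ...then wait before touching the halo values. */
      MPI_Wait(&req, MPI_STATUS_IGNORE);
  }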

  37. Scope of communications • Knowing which tasks must communicate with each other is critical • Point-to-point: involves two tasks, with one task acting as the sender/producer of data and the other acting as the receiver/consumer • Collective: data sharing between more than two tasks, typically members of a common group • Examples of collective communication: broadcast, scatter, gather, reduction
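A sketch of two common collectives in C with MPI: a broadcast copies one value from the root to every task in the group, and a reduction combines one contribution from every task (here a sum) onto the root.

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[]) {
      int rank, size, n = 0, sum = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (rank == 0) n = 100;                        /* only the root owns n at first */
      MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* now every rank has n */

      int mine = rank + 1;                           /* each rank contributes one value */
      MPI_Reduce(&mine, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0) printf("n = %d, sum of contributions = %d\n", n, sum);
      MPI_Finalize();
      return 0;
  }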

  38. Synchronization • Barrier : • Usually implies that all tasks are involved • Each task performs its work until it reaches the barrier. It then stops, or "blocks" • When the last task reaches the barrier, all tasks are synchronized • tasks are automatically released to continue their work
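A minimal barrier sketch in C with OpenMP: no thread prints its phase-2 line until every thread has finished phase 1.

  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      #pragma omp parallel
      {
          int id = omp_get_thread_num();
          printf("thread %d: phase 1\n", id);   /* work before the barrier */

          #pragma omp barrier                   /* block until all threads arrive */

          printf("thread %d: phase 2\n", id);   /* no thread starts phase 2 early */
      }
      return 0;
  }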

  39. Load Balancing • Distributing work among tasks so that all tasks are kept busy all of the time • The slowest task will determine the overall performance.

  40. Achieve Load Balance • Equally partition the work each task receives • For array/matrix operations where each task performs similar work, evenly distribute the data • For loop iterations where the work done in each iteration is similar, evenly distribute the iterations • Use a performance analysis tool to adjust the work • Dynamic work assignment • Even if data is evenly distributed among tasks: • Sparse arrays - some tasks will have actual data to work on while others have mostly "zeros" • Adaptive grid methods - some tasks may need to refine their mesh while others don't • N-body simulations - some particles may migrate to/from their original task domain to another task's, and the particles owned by some tasks require more work than those owned by other tasks • Scheduler-task pool approach • Detect and handle load imbalances dynamically
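One way to get dynamic work assignment on a shared-memory machine is to let the runtime hand out small chunks of iterations from a pool as threads become free; a sketch in C with OpenMP (the function, array, and chunk size of 4 are illustrative):

  #include <omp.h>

  /* Iterations may cost very different amounts of work (sparse rows, adaptive
     cells, migrating particles), so small chunks handed out on demand keep
     all threads busy instead of pre-assigning equal-sized blocks. */
  void process_rows(double *row_cost, int nrows) {
      #pragma omp parallel for schedule(dynamic, 4)
      for (int i = 0; i < nrows; i++) {
          row_cost[i] = 0.0;   /* stand-in for uneven per-row work */
      }
  }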

  41. Granularity • Ratio of computation to communication • Fine-grain Parallelism • Relatively small amounts of computational work between communication events • Low computation to communication ratio • Facilitates load balancing • Implies high communication overhead and less opportunity for performance enhancement • If granularity is too fine it is possible that the overhead required for communications and synchronization between tasks takes longer than the computation. • Coarse-grain parallelism • Relatively large amounts of computational work between communication/synchronization events • High computation to communication ratio • Implies more opportunity for performance increase • Harder to load balance efficiently

  42. Granularity - What is Best? • The most efficient granularity depends on the algorithm and the hardware environment in which it runs • If the overhead associated with communications and synchronization is high relative to execution speed, coarse granularity is advantageous • Fine-grain parallelism gives better load balance

  43. Speedup and Efficiency • Speedup: a measure of how much faster the computation executes versus the best serial code • Serial time divided by parallel time • Example: painting a picket fence • 30 minutes of preparation (serial) • One minute to paint a single picket • 30 minutes of cleanup (serial) • Thus, 300 pickets takes 360 minutes (serial time)

  44. Computing Speedup • Illustrates Amdahl’s Law: potential speedup is restricted by the serial portion • What if the fence owner uses a spray gun to paint 300 pickets in one hour? • That is a better serial algorithm • If no spray guns are available for multiple workers, what is the maximum parallel speedup? (see the worked numbers below)
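One way to work the fence numbers: with $N$ painters the pickets parallelize but the 60 minutes of preparation and cleanup stay serial, so

$$T(N) = 30 + \frac{300}{N} + 30 \ \text{minutes}, \qquad S(N) = \frac{360}{60 + 300/N}, \qquad \lim_{N\to\infty} S(N) = \frac{360}{60} = 6.$$

The spray gun is a better serial algorithm: one worker finishes in $30 + 60 + 30 = 120$ minutes, a speedup of 3 with no parallelism at all.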

  45. Amdahl’s law
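In formula form, Amdahl's law states that if a fraction $P$ of the runtime can be parallelized over $N$ processors, the speedup is

$$S(N) = \frac{1}{(1 - P) + P/N}, \qquad S_{\max} = \lim_{N\to\infty} S(N) = \frac{1}{1 - P}.$$

For the picket fence, $P = 300/360 \approx 0.83$, so $S_{\max} = 6$, matching the worked example above.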

  46. Scalable problems • Problems that increase the percentage of parallel time with their size are more scalable than problems with a fixed percentage of parallel time • Example: certain problems demonstrate increased performance by increasing the problem size • Original problem: 2D grid calculations 85 sec (85%), serial fraction 15 sec (15%) • Increase the problem size by doubling the grid dimensions and halving the time step: four times the number of grid points and twice the number of time steps • Scaled problem: 2D grid calculations 680 sec (97.84%), serial fraction 15 sec (2.16%)
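Plugging the slide's percentages into Amdahl's law (a rough calculation that assumes the grid work parallelizes perfectly) shows why the scaled problem is more favorable:

$$S_{\text{original}}(N) = \frac{1}{0.15 + 0.85/N} \;\le\; \frac{1}{0.15} \approx 6.7, \qquad S_{\text{scaled}}(N) = \frac{1}{0.0216 + 0.9784/N} \;\le\; \frac{1}{0.0216} \approx 46.$$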

  47. Parallel Examples • Array processing • PI calculation • Simple heat equation

  48. Array Processing • Computation on each array element is independent of the others • Serial code:
  do j = 1,n
    do i = 1,n
      a(i,j) = fcn(i,j)
    end do
  end do
  • Embarrassingly parallel situation

  49. Array Processing: Parallel Solution 1 • Array elements are distributed so that each processor owns a portion of the array • Independent calculation of array elements ensures there is no need for communication between tasks • Block and cyclic distributions are possible • Parallel code:
  do j = mystart, myend
    do i = 1,n
      a(i,j) = fcn(i,j)
    end do
  end do
  • Notice that only the outer loop variables differ from the serial solution.

  50. Array Processing: Parallel Solution 1 (master/worker pseudocode)
  find out if I am MASTER or WORKER
  if I am MASTER
    initialize the array
    send each WORKER info on part of array it owns
    send each WORKER its portion of initial array
    receive from each WORKER results
  else if I am WORKER
    receive from MASTER info on part of array I own
    receive from MASTER my portion of initial array
    # calculate my portion of array
    do j = my first column, my last column
      do i = 1,n
        a(i,j) = fcn(i,j)
      end do
    end do
    send MASTER results
  endif
  • Static load balancing: each task has a fixed amount of work; there may be significant idle time for faster or more lightly loaded processors; the slowest task determines overall performance • Not a major concern if all tasks are performing the same amount of work on identical machines • If load balance is a problem, use a 'pool of tasks' approach
