
Programming Models and All That






Presentation Transcript


  1. Programming Models and All That • Evolution of architectures vis a vis programming models • Fundamental issues in convergence architectures and prog. models • Parallel programming • Process of parallelization • What parallel programs look like in major programming models • Programming for performance (next class)

  2. Parallel Programming Model • What the programmer uses in writing applications • Specifies communication and synchronization • Examples: • Shared address space: like bulletin board • Message passing: like letters or phone calls, explicit point to point • Data parallel: more regimented, global actions on data • Implemented with shared address space or message passing

  3. Modern Layered Framework • Layers, top to bottom: parallel applications (CAD, database, scientific modeling, multiprogramming) • programming models (shared address space, message passing, data parallel) • compilation or library over the communication abstraction (user/system boundary) • operating systems support (hardware/software boundary) • communication hardware • physical communication medium

  4. Evolution of Architectural Models • Historically, machines were tailored to programming models • Programming model, communication abstraction, and machine organization lumped together as the “architecture” • Understanding their evolution helps understand convergence • Identify core concepts • Evolution of Architectural Models: • Shared Address Space (SAS) • Message Passing • Data Parallel • Others (won’t discuss): Dataflow, Systolic Arrays • Examine programming model, motivation, and convergence

  5. Shared Address Space Architectures • Any processor can directly reference any memory location • Communication occurs implicitly as result of loads and stores • Convenient: • Location transparency • Similar programming model to time-sharing on uniprocessors • Except processes run on different processors • Good throughput on multiprogrammed workloads • Naturally provided on wide range of platforms • History dates at least to precursors of mainframes in early 60s • Wide range of scale: few to hundreds of processors • Popularly known as shared memory machines or model • Ambiguous: memory may be physically distributed among processors

  6. Shared Address Space Model • Figure: machine physical address space and the virtual address spaces of processes P0..Pn communicating via shared addresses; the shared portion of each virtual address space maps to common physical addresses (stores there by one process are seen by loads in others), while each process's private portion maps to distinct physical locations • Process: virtual address space plus one or more threads of control • Portions of address spaces of processes are shared • Writes to shared address visible to other threads (in other processes too) • Natural extension of uniprocessor model: conventional memory operations for comm.; special atomic operations for synchronization • OS uses shared memory to coordinate processes
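
A minimal POSIX-threads sketch of this model: communication happens through ordinary stores to a shared variable, and a lock supplies the special synchronization operation (the counter example itself is illustrative, not from the slide):

    #include <pthread.h>
    #include <stdio.h>

    /* Shared portion of the address space: visible to all threads. */
    static int shared_count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        int private_work = *(int *)arg;       /* private data: each thread has its own copy */
        for (int i = 0; i < private_work; i++) {
            pthread_mutex_lock(&lock);        /* special synchronization operation          */
            shared_count++;                   /* ordinary store to shared data = communication */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        int work = 1000;
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, &work);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("shared_count = %d\n", shared_count);   /* prints 4000 */
        return 0;
    }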

  7. Communication Hardware for SAS • Figure: processors, memory modules (Mem), and I/O controllers with their I/O devices, all attached to a hardware interconnect • Also natural extension of uniprocessor • Already have processor, one or more memory modules and I/O controllers connected by hardware interconnect of some sort • Memory capacity increased by adding modules, I/O by controllers • Add processors for processing!

  8. History of SAS Architecture • Figure: “mainframe” organization (processors P and I/O controllers connected to memory modules M through a crossbar) vs. “minicomputer” organization (processors with caches ($), memory, and I/O controllers sharing a single bus) • “Mainframe” approach • Motivated by multiprogramming • Extends crossbar used for memory bandwidth and I/O • Originally processor cost limited scale to small; later, cost of crossbar • Bandwidth scales with p • High incremental cost; use multistage interconnect instead • “Minicomputer” approach • Almost all microprocessor systems have a bus • Motivated by multiprogramming, transaction processing • Used heavily for parallel computing • Called symmetric multiprocessor (SMP) • Latency larger than for uniprocessor • Bus is bandwidth bottleneck • caching is key: coherence problem • Low incremental cost

  9. Example: Intel Pentium Pro Quad • Figure: four P-Pro processor modules (each a CPU with 256-KB L2 cache, interrupt controller, and bus interface) on the P-Pro bus (64-bit data, 36-bit address, 66 MHz); a memory controller and MIU connect to 1-, 2-, or 4-way interleaved DRAM, and PCI bridges connect to PCI buses and I/O cards • All coherence and multiprocessing glue integrated in processor module • Highly integrated, targeted at high volume • Low latency and bandwidth

  10. Example: SUN Enterprise • Memory on processor cards themselves • 16 cards of either type: processors + memory, or I/O • But all memory accessed over bus, so symmetric • Higher bandwidth, higher latency bus

  11. Scaling Up • Figure: “dance hall” organization (all processors on one side of the interconnect, all memory on the other) vs. distributed-memory organization (memory attached to each processor node) • Problem is interconnect: cost (crossbar) or bandwidth (bus) • Dance-hall: bandwidth still scalable, but lower cost than crossbar • latencies to memory uniform, but uniformly large • Distributed memory or non-uniform memory access (NUMA) • Construct shared address space out of simple message transactions across a general-purpose network (e.g. read-request, read-response) • Caching shared (particularly nonlocal) data?

  12. Example: Cray T3E • Scale up to 1024 processors, 480MB/s links • Memory controller generates comm. request for nonlocal references • Communication architecture tightly integrated into node • No hardware mechanism for coherence (SGI Origin etc. provide this)

  13. Caches and Cache Coherence • Caches play key role in all cases • Reduce average data access time • Reduce bandwidth demands placed on shared interconnect • But private processor caches create a problem • Copies of a variable can be present in multiple caches • A write by one processor may not become visible to others • They’ll keep accessing stale value in their caches • Cache coherence problem • Need to take actions to ensure visibility

  14. Example Cache Coherence Problem • Figure: processors P1, P2, P3 with private caches and a shared memory holding u = 5; events: (1) P1 reads u and caches 5, (2) P3 reads u and caches 5, (3) P3 writes u = 7 in its cache, (4) P1 re-reads u from its stale cached copy, (5) P2 reads u from memory • Processors see different values for u after event 3 • With write-back caches, value written back to memory depends on happenstance of which cache flushes or writes back the value and when • Processes accessing main memory may see very stale value • Unacceptable to programs, and frequent!
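
The trace below renders the figure's five events as straight-line C purely for readability; in a real machine each line is issued by the processor named in its comment, and without hardware cache coherence the reads at events (4) and (5) could still return the stale 5.

    /* Shared location, initially 5 in memory. */
    static int u = 5;

    void coherence_trace(void)
    {
        int a, b, c, d;
        a = u;   /* (1) P1 reads u and caches the value 5                   */
        b = u;   /* (2) P3 reads u and caches the value 5                   */
        u = 7;   /* (3) P3 writes 7 into its cache; write-back is deferred  */
        c = u;   /* (4) P1 re-reads u: may hit its stale cached 5           */
        d = u;   /* (5) P2 reads u from memory: memory may still hold 5     */
        (void)a; (void)b; (void)c; (void)d;   /* silence unused warnings    */
    }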

  15. Cache Coherence • Reading a location should return latest value written (by any process) • Easy in uniprocessors • Except for I/O: coherence between I/O devices and processors • But infrequent, so software solutions work • Would like same to hold when processes run on different processors • E.g. as if the processes were interleaved on a uniprocessor • But coherence problem much more critical in multiprocessors • Pervasive and performance-critical • A very basic design issue in supporting the prog. model effectively • It’s worse than that: what is the “latest” value with indept. processes? • Memory consistency models

  16. SGI Origin2000 • Hub chip provides memory control, communication and cache coherence support • Plus I/O communication etc

  17. Shared Address Space Machines Today • Bus-based, cache coherent at small scale • Distributed memory, cache-coherent at larger scale • Without cache coherence, are essentially (fast) message passing systems • Clusters of these at even larger scale

  18. Message-Passing Programming Model • Figure: process P executes send(X, Q, t), naming its local address X, destination process Q, and tag t; process Q executes receive(Y, P, t), naming its local address Y, source P, and tag t; the match copies the data from P's address space into Q's • Send specifies data buffer to be transmitted and receiving process • Recv specifies sending process and application storage to receive into • Optional tag on send and matching rule on receive • Memory-to-memory copy, but need to name processes • User process names only local data and entities in process/tag space • In simplest form, the send/recv match achieves pairwise synch event • Other variants too • Many overheads: copying, buffer management, protection
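
A minimal sketch of the same pairing in MPI, a common message-passing library; ranks 0 and 1 play the roles of P and Q, and the buffer size and tag value 7 are arbitrary:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, X[4] = { 1, 2, 3, 4 }, Y[4];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Process P: send local buffer X to process 1 with tag 7. */
            MPI_Send(X, 4, MPI_INT, 1, 7, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Process Q: receive into local buffer Y, matching source 0 and tag 7. */
            MPI_Recv(Y, 4, MPI_INT, 0, 7, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d %d %d %d\n", Y[0], Y[1], Y[2], Y[3]);
        }
        MPI_Finalize();
        return 0;
    }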

  19. Message Passing Architectures • Complete computer as building block, including I/O • Communication via explicit I/O operations • Programming model: directly access only private address space (local memory), comm. via explicit messages (send/receive) • High-level block diagram similar to distributed-memory SAS • But comm. needn’t be integrated into memory system, only I/O • History of tighter integration, evolving to spectrum incl. clusters • Easier to build than scalable SAS • Can use clusters of PCs or SMPs on a LAN • Programming model more removed from basic hardware operations • Library or OS intervention

  20. Evolution of Message-Passing Machines • Early machines: FIFO on each link • Hardware close to programming model; synchronous ops • Replaced by DMA, enabling non-blocking ops • Buffered by system at destination until recv • Diminishing role of topology • Store-and-forward routing: topology important • Introduction of pipelined routing made it less so • Cost is in node-network interface • Simplifies programming

  21. Example: IBM SP-2 • Made out of essentially complete RS6000 workstations • Network interface integrated in I/O bus (bw limited by I/O bus) • Doesn’t need to see memory references

  22. Example: Intel Paragon • Network interface integrated in memory bus, for performance

  23. Toward Architectural Convergence • Evolution and role of software have blurred boundary • Send/recv supported on SAS machines via buffers • Can construct global address space on MP using hashing • Software shared memory (e.g. using pages as units of comm.) • Hardware organization converging too • Tighter NI integration even for MP (low-latency, high-bandwidth) • At lower level, even hardware SAS passes hardware messages • Hw support for fine-grained comm makes software MP faster as well • Even clusters of workstations/SMPs are parallel systems • Emergence of fast system area networks (SAN) • Programming models distinct, but organizations converged • Nodes connected by general network and communication assists • Assists range in degree of integration, all the way to clusters

  24. Data Parallel Systems • Programming model • Operations performed in parallel on each element of data structure • Logically single thread of control, performs sequential or parallel steps • Conceptually, a processor associated with each data element • Architectural model • Array of many simple, cheap processors with little memory each • Processors don’t sequence through instructions • Attached to a control processor that issues instructions • Specialized and general communication, cheap global synchronization • Original motivations • Matches simple differential equation solvers • Centralize high cost of instruction fetch/sequencing

  25. Application of Data Parallelism • Each PE contains an employee record with his/her salary • If salary > 100K then salary = salary * 1.05 else salary = salary * 1.10 • Logically, the whole operation is a single step • Some processors enabled for arithmetic operation, others disabled • Other examples: • Finite differences, linear algebra, ... • Document searching, graphics, image processing, ... • Some recent machines: • Thinking Machines CM-1, CM-2 (and CM-5) • Maspar MP-1 and MP-2
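
A minimal sketch of the salary update in C: the slide's single logical step expressed as an element-wise loop, where the branch corresponds to enabling some PEs and disabling the rest (the array layout and function name are assumptions):

    #include <stddef.h>

    /* Data-parallel salary adjustment: conceptually one step over all
     * elements; the branch corresponds to enabling/disabling PEs. */
    void adjust_salaries(double *salary, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (salary[i] > 100000.0)
                salary[i] *= 1.05;   /* PEs with salary > 100K enabled here */
            else
                salary[i] *= 1.10;   /* the remaining PEs enabled here      */
        }
    }

On a SIMD machine both branches are issued globally with the inactive PEs masked off; in SPMD style the iterations would simply be split among processes.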

  26. Evolution and Convergence • Rigid control structure (SIMD in Flynn taxonomy) • SISD = uniprocessor, MIMD = multiprocessor • Popular when cost savings of centralized sequencer were high • 60s when CPU was a cabinet • Replaced by vectors in mid-70s • More flexible w.r.t. memory layout and easier to manage • Revived in mid-80s when 32-bit datapath slices just fit on chip • No longer true with modern microprocessors • Other reasons for demise • Simple, regular applications have good locality, can do well anyway • Loss of applicability due to hardwiring data parallelism • MIMD machines as effective for data parallelism and more general • Prog. model converges with SPMD (single program multiple data) • Contributes need for fast global synchronization • Structured global address space, implemented with either SAS or MP

  27. Convergence: Generic Parallel Architecture • A generic modern multiprocessor • Node: processor(s), memory system, plus communication assist • Network interface and communication controller • Scalable network • Communication assist provides primitives with perf profile • Build your programming model on this • Convergence allows lots of innovation, now within framework • Integration of assist with node, what operations, how efficiently...

  28. Programming Models and All That • Evolution of architectures vis a vis programming models • Fundamental issues in convergence architectures and prog. models • Parallel programming • Process of parallelization • What parallel programs look like in major programming models • Programming for performance (next class)

  29. The Model/System Contract • Model specifies an interface (contract) to the programmer • Naming: How are logically shared data and/or processes referenced? • Operations: What operations are provided on these data? • Ordering: How are accesses to data ordered and coordinated? • Replication: How are data replicated to reduce communication? • Underlying implementation addresses performance issues • Communication Cost: Latency, bandwidth, overhead, occupancy • We’ll look at the aspects of the contract through examples

  30. Supporting the Contract • Figure: the layered framework from slide 3 (parallel applications; programming models; compilation or library over the communication abstraction; operating systems support; communication hardware; physical communication medium) • Given prog. model can be supported in various ways at various layers • In fact, each layer takes a position on all issues (naming, ops, performance etc), and any set of positions can be mapped to another by software • Key issues for supporting programming models are: • What primitives are provided at comm. abstraction layer • How efficiently are they supported (hw/sw) • How are programming models mapped to them

  31. Recap of Parallel Architecture • Parallel architecture is important thread in evolution of architecture • At all levels • Multiple processor level now in mainstream of computing • Exotic designs have contributed much, but given way to convergence • Push of technology, cost and application performance • Basic processor-memory architecture is the same • Key architectural issue is in communication architecture • How communication is integrated into memory and I/O system on node • Fundamental design issues • Functional: naming, operations, ordering • Performance: organization, replication, performance characteristics • Design decisions driven by workload-driven evaluation • Integral part of the engineering focus

  32. Programming Models and All That • Evolution of architectures vis a vis programming models • Fundamental issues in convergence architectures and prog. models • Parallel programming • Process of parallelization • What parallel programs look like in major programming models • Programming for performance (next class)

  33. Creating a Parallel Program • Assumption: Sequential algorithm is given • Sometimes need very different algorithm, but beyond scope • Pieces of the job: • Identify work that can be done in parallel • Partition work and perhaps data among processes • Note: work includes computation, data access and I/O • Manage data access, communication and synchronization • Main goal: Speedup (plus low prog. effort and resource needs) • Speedup(p) = Performance(p) / Performance(1) • For a fixed problem: Speedup(p) = Time(1) / Time(p)

  34. Some Important Concepts • Task: • Arbitrary piece of undecomposed work in parallel computation • Executed sequentially; concurrency is only across tasks • E.g. a particle/cell in Barnes-Hut, a ray or ray group in Raytrace • Fine-grained versus coarse-grained tasks • Process (thread): • Abstract entity that performs the tasks assigned to processes • Processes communicate and synchronize to perform their tasks • Processor: • Physical engine on which process executes • Processes virtualize machine to programmer • first write program in terms of processes, then map to processors

  35. Steps in Creating a Parallel Program • Figure: sequential computation → decomposition → tasks → assignment → processes p0..p3 → orchestration → parallel program → mapping → processors P0..P3; decomposition and assignment together are labeled partitioning • 4 steps: Decomposition, Assignment, Orchestration, Mapping • Done by programmer or system software (compiler, runtime, ...) • Issues are the same, so assume programmer does it all explicitly

  36. Example Application: Ocean Simulation • Model as two-dimensional grids • Discretize in space and time • finer spatial and temporal resolution => greater accuracy • Many different computations per time step • set up and solve equations • Concurrency across and within grid computations • Figure: (a) cross sections, (b) spatial discretization of a cross section

  37. Problem With the Example • Large, Complex Program • Examine a simplified version of a piece of Ocean simulation • Iterative equation solver • Illustrate parallel program in low-level parallel language • C-like pseudocode with simple extensions for parallelism • Expose basic comm. and synch. primitives that must be supported • State of most real parallel programming today

  38. Grid Solver Example • Simplified version of solver in Ocean simulation • Gauss-Seidel (near-neighbor) sweeps to convergence • interior n-by-n points of (n+2)-by-(n+2) grid updated in each sweep • updates done in-place in grid, and difference from prev. value computed • accumulate partial diffs into global diff at end of every sweep • check if error has converged (to within a tolerance parameter) • if so, exit solver; if not, do another sweep
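
As a reference point, here is a C rendering of the sequential sweep the slide describes, in the style commonly used for this example; the row-major grid layout with a boundary ring and the TOL value are assumptions:

    #include <math.h>

    #define TOL 1e-3   /* convergence tolerance (assumed value) */

    /* One near-neighbor sweep at a time over the interior n-by-n points of an
     * (n+2)-by-(n+2) grid; updates are done in place, and the sum of per-point
     * differences is accumulated into diff to test convergence. */
    void solve(double *A, int n)
    {
        int done = 0;
        while (!done) {
            double diff = 0.0;
            for (int i = 1; i <= n; i++) {
                for (int j = 1; j <= n; j++) {
                    double old = A[i * (n + 2) + j];
                    A[i * (n + 2) + j] = 0.2 * (A[i * (n + 2) + j]
                        + A[i * (n + 2) + j - 1] + A[i * (n + 2) + j + 1]
                        + A[(i - 1) * (n + 2) + j] + A[(i + 1) * (n + 2) + j]);
                    diff += fabs(A[i * (n + 2) + j] - old);
                }
            }
            if (diff / (n * n) < TOL)   /* converged within tolerance? */
                done = 1;
        }
    }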

  39. Steps in Creating a Parallel Program • Figure: sequential computation → decomposition → tasks → assignment → processes → orchestration → parallel program → mapping → processors • 4 steps: Decomposition, Assignment, Orchestration, Mapping

  40. Decomposition • Simple way to identify concurrency is to look at loop iterations • dependence analysis; if not enough, then look further • Not much concurrency here at this level (all loops sequential) • Examine fundamental dependences, ignoring loop structure • Concurrency O(n) along anti-diagonals, serialization O(n) along diag. • Retain loop structure, use pt-to-pt synch; Problem: too many synch ops. • Restructure loops, use global synch; imbalance and too much synch

  41. Exploit Application Knowledge • Reorder grid traversal: red-black ordering • Different ordering of updates: may converge quicker or slower • Red sweep and black sweep are each fully parallel: • Global synch between them (conservative but convenient) • Simpler, asynchronous example to illustrate • no red-black, simply ignore dependences within sweep • sequential order same as original, but parallel program nondeterministic
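
A minimal sketch of the red-black variant, reusing the grid layout and near-neighbor update of the earlier solver sketch (the function name and parity convention are assumptions):

    #include <math.h>

    /* Update only the points whose (i + j) parity matches `color`.
     * Within one color the updates are mutually independent, so each
     * half-sweep is fully parallel; a global barrier between the red
     * and the black half-sweep is the only synchronization needed. */
    void half_sweep(double *A, int n, int color, double *diff)
    {
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                if ((i + j) % 2 == color) {
                    double old = A[i * (n + 2) + j];
                    A[i * (n + 2) + j] = 0.2 * (A[i * (n + 2) + j]
                        + A[i * (n + 2) + j - 1] + A[i * (n + 2) + j + 1]
                        + A[(i - 1) * (n + 2) + j] + A[(i + 1) * (n + 2) + j]);
                    *diff += fabs(A[i * (n + 2) + j] - old);
                }
    }

Each solver iteration then becomes: half_sweep with color 0, barrier, half_sweep with color 1, barrier, convergence test.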

  42. Decomposition Only • Decomposition into grid elements (tasks): degree of concurrency n2 • To decompose into rows, make line 18 loop sequential; degree n • for_all leaves assignment to system • but implicit global synch. at end of for_all loop • Let’s assume we decompose into rows

  43. Decomposition More Generally • Break up computation into tasks to be divided among processes • Tasks may become available dynamically • No. of available tasks may vary with time • i.e. identify concurrency and decide level at which to exploit it • Goal: Enough tasks to keep processes busy, but not too many • No. of tasks available at a time is upper bound on achievable speedup

  44. Limited Concurrency: Amdahl’s Law • Most fundamental limitation on parallel speedup • If fraction s of seq execution is inherently serial, speedup <= 1/s • Example: 2-phase calculation • sweep over n-by-n grid and do some independent computation • sweep again and add each value to global sum • Time for first phase = n^2/p • Second phase serialized at global variable, so time = n^2 • Speedup <= 2n^2 / (n^2 + n^2/p), or at most 2 • Trick: divide second phase into two • accumulate into private sum during sweep • add per-process private sum into global sum • Parallel time is n^2/p + n^2/p + p, and speedup at best 2pn^2 / (2n^2 + p^2), which approaches p for large n
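
A minimal pthreads sketch of the trick: each thread accumulates into a private sum during its part of the sweep, so only the p final additions are serialized at the global variable (thread count, grid size, and the compute() placeholder are assumptions):

    #include <pthread.h>

    #define P 4                      /* number of threads (assumed) */
    #define N 1024                   /* grid dimension (assumed)    */

    static double grid[N][N];
    static double global_sum = 0.0;
    static pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

    static double compute(int i, int j) { return (double)(i + j); }  /* placeholder work */

    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        double my_sum = 0.0;
        for (int i = id * (N / P); i < (id + 1) * (N / P); i++)
            for (int j = 0; j < N; j++) {
                grid[i][j] = compute(i, j);   /* phase 1: n^2/p independent work  */
                my_sum += grid[i][j];         /* phase 2a: private accumulation   */
            }
        pthread_mutex_lock(&sum_lock);        /* phase 2b: only p serialized adds */
        global_sum += my_sum;
        pthread_mutex_unlock(&sum_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[P];
        int ids[P];
        for (int i = 0; i < P; i++) { ids[i] = i; pthread_create(&t[i], NULL, worker, &ids[i]); }
        for (int i = 0; i < P; i++) pthread_join(t[i], NULL);
        return 0;
    }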

  45. Pictorial Depiction • Figure: work done concurrently vs. time for the 2-phase example: (a) sequential execution, both n^2 phases at concurrency 1; (b) first phase spread over p processors (n^2/p), second phase still serialized (n^2); (c) with the trick, both sweeps take n^2/p and only the final p additions are serial

  46. Concurrency Profiles • Cannot usually divide into “serial part” and “parallel part” • Area under curve is total work done, or time with 1 processor • Horizontal extent is lower bound on time (infinite processors) • Speedup is the ratio: Σ_k f_k·k / Σ_k f_k·⌈k/p⌉, base case: 1 / (s + (1-s)/p) • Amdahl’s law applies to any overhead, not just limited concurrency
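
A small sketch that evaluates this bound numerically, assuming f_k is the amount of time the profile spends at concurrency k (the profile values are made up for illustration):

    #include <stdio.h>
    #include <math.h>

    /* profile[k-1] = time spent at concurrency k (made-up data); the area
     * under this curve, the sum of profile[k-1]*k, is the total work. */
    static const double profile[] = { 4.0, 1.0, 0.0, 3.0, 0.0, 0.0, 0.0, 2.0 };
    #define KMAX ((int)(sizeof profile / sizeof profile[0]))

    /* Speedup bound on p processors: (sum_k f_k*k) / (sum_k f_k*ceil(k/p)). */
    static double speedup_bound(int p)
    {
        double work = 0.0, time_p = 0.0;
        for (int k = 1; k <= KMAX; k++) {
            work   += profile[k - 1] * k;
            time_p += profile[k - 1] * ceil((double)k / p);
        }
        return work / time_p;
    }

    int main(void)
    {
        for (int p = 1; p <= 8; p *= 2)
            printf("p = %d: speedup <= %.2f\n", p, speedup_bound(p));
        return 0;
    }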

  47. Steps in Creating a Parallel Program • Figure: sequential computation → decomposition → tasks → assignment → processes → orchestration → parallel program → mapping → processors • 4 steps: Decomposition, Assignment, Orchestration, Mapping

  48. Assignment in the Grid Solver • Static assignments (given decomposition into rows) • block assignment of rows: Row i is assigned to process ⌊i/(n/p)⌋ • cyclic assignment of rows: process i is assigned rows i, i+p, and so on • Dynamic assignment • get a row index, work on the row, get a new row, and so on • Static assignment into rows reduces concurrency (from n to p) • block assign. reduces communication by keeping adjacent rows together
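
A minimal SPMD-style sketch of the two static assignments, where each process knows its id pid and the process count p; sweep_row() stands in for the per-row update from the solver sketch above, and all names here are illustrative:

    /* Per-row near-neighbor update, e.g. the inner j loop of the solver sketch. */
    void sweep_row(double *A, int n, int i);

    /* Block assignment: process pid owns the contiguous interior rows
     * pid*(n/p)+1 .. (pid+1)*(n/p); assumes p divides n. */
    void sweep_block(double *A, int n, int pid, int p)
    {
        int rows = n / p;
        for (int i = pid * rows + 1; i <= (pid + 1) * rows; i++)
            sweep_row(A, n, i);
    }

    /* Cyclic assignment: process pid owns interior rows pid+1, pid+1+p, ... */
    void sweep_cyclic(double *A, int n, int pid, int p)
    {
        for (int i = pid + 1; i <= n; i += p)
            sweep_row(A, n, i);
    }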

  49. Assignment More Generally • Specifying mechanism to divide work up among processes • E.g. which process computes which grid points or rows • Together with decomposition, also called partitioning • Goals: Balance workload, reduce communication and management cost • Structured approaches usually work well • Code inspection (parallel loops) or understanding of application • Well-known heuristics • Static versus dynamic assignment • We usually worry about partitioning (decomp. + assignment) first • Usually independent of architecture or prog model • But cost and complexity of using primitives may affect decisions • Let’s dig into orchestration under three programming models
