ECE1747 Parallel Programming
Presentation Transcript

  1. ECE1747 Parallel Programming Shared Memory Multithreading Pthreads

  2. Shared Memory • All threads access the same shared memory data space. [Figure: a single shared memory address space accessed by processors proc1 through procN]

  3. Shared Memory (continued) • Concretely, it means that a variable x, a pointer p, or an array a[] refer to the same object, no matter which processor the reference originates from. • We have more or less implicitly assumed this to be the case in earlier examples.

  4. Shared Memory [Figure: a single array a held in shared memory, accessible by proc1 through procN]

  5. Distributed Memory - Message Passing • The alternative model to shared memory. [Figure: each processor proc1 through procN has its own memory mem1 through memN, each holding its own copy of a; the processors communicate over a network]

  6. Shared Memory vs. Message Passing • The same terminology is also used to distinguish hardware. • For us: we distinguish programming models, not hardware.

  7. Programming vs. Hardware • One can implement • a shared memory programming model • on shared or distributed memory hardware • (also in software or in hardware) • One can implement • a message passing programming model • on shared or distributed memory hardware

  8. Portability of Programming Models [Figure: both shared memory programming and message passing programming can be implemented on either a shared memory machine or a distributed memory machine]

  9. Shared Memory Programming: Important Point to Remember • No matter what the implementation, it conceptually looks like shared memory. • There may be some (important) performance differences.

  10. Multithreading • User has explicit control over threads. • Good: control can be used for performance benefit. • Bad: user has to deal with it.

  11. Pthreads • POSIX standard shared-memory multithreading interface. • Provides primitives for process management and synchronization.

  12. What does the user have to do? • Decide how to decompose the computation into parallel parts. • Create (and destroy) processes to support that decomposition. • Add synchronization to make sure dependences are covered.

  13. General Thread Structure • Typically, a thread is a concurrent execution of a function or a procedure. • So, your program needs to be restructured such that parallel parts form separate procedures or functions.

  14. Example of Thread Creation [Figure: main() calls pthread_create(func); a new thread starts executing func() while main() continues]

  15. Thread Joining Example
  void *func(void *) { ….. }

  pthread_t id;
  int X;

  pthread_create(&id, NULL, func, &X);
  …..
  pthread_join(id, NULL);
  …..

  16. Example of Thread Creation (contd.) [Figure: main() calls pthread_create(func) and later pthread_join(id); the created thread runs func() and finishes with pthread_exit()]

  17. Sequential SOR
  for some number of timesteps/iterations {
    for( i=0; i<n; i++ )
      for( j=1; j<n; j++ )
        temp[i][j] = 0.25 * ( grid[i-1][j] + grid[i+1][j] +
                              grid[i][j-1] + grid[i][j+1] );
    for( i=0; i<n; i++ )
      for( j=1; j<n; j++ )
        grid[i][j] = temp[i][j];
  }

  18. Parallel SOR • First (i,j) loop nest can be parallelized. • Second (i,j) loop nest can be parallelized. • Must wait to start the second loop nest until all processors have finished the first. • Must wait to start the first loop nest of the next iteration until all processors have finished the second loop nest of the previous iteration. • Give n/p rows to each processor.

  19. Pthreads SOR: Parallel parts (1)
  void* sor_1(void *s)
  {
    int slice = (int) s;
    int from = (slice*n)/p;
    int to = ((slice+1)*n)/p;
    for( i=from; i<to; i++ )
      for( j=0; j<n; j++ )
        temp[i][j] = 0.25*(grid[i-1][j] + grid[i+1][j]
                           + grid[i][j-1] + grid[i][j+1]);
  }

  20. Pthreads SOR: Parallel parts (2)
  void* sor_2(void *s)
  {
    int slice = (int) s;
    int from = (slice*n)/p;
    int to = ((slice+1)*n)/p;
    for( i=from; i<to; i++ )
      for( j=0; j<n; j++ )
        grid[i][j] = temp[i][j];
  }

  21. Pthreads SOR: main
  for some number of timesteps {
    for( i=0; i<p; i++ )
      pthread_create(&thrd[i], NULL, sor_1, (void *)i);
    for( i=0; i<p; i++ )
      pthread_join(thrd[i], NULL);
    for( i=0; i<p; i++ )
      pthread_create(&thrd[i], NULL, sor_2, (void *)i);
    for( i=0; i<p; i++ )
      pthread_join(thrd[i], NULL);
  }

  22. Summary: Thread Management • pthread_create(): creates a parallel thread executing a given function (and arguments), returns thread identifier. • pthread_exit(): terminates thread. • pthread_join(): waits for thread with particular thread identifier to terminate.

  23. Summary: Program Structure • Encapsulate parallel parts in functions. • Use function arguments to parameterize what a particular thread does. • Call pthread_create() with the function and arguments, save thread identifier returned. • Call pthread_join() with that thread identifier.
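  The pieces above fit together into a small, complete program. The following is a minimal sketch, not taken from the slides: the worker function, thread count, and the cast used to pass the index are made up for illustration.

  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 4

  /* Hypothetical worker: each thread just reports its slice number. */
  void *worker(void *arg)
  {
      int slice = (int)(long)arg;   /* recover the integer passed at creation */
      printf("thread %d running\n", slice);
      return NULL;
  }

  int main(void)
  {
      pthread_t thrd[NTHREADS];
      int i;

      /* Create the threads, parameterizing each one by its index. */
      for (i = 0; i < NTHREADS; i++)
          pthread_create(&thrd[i], NULL, worker, (void *)(long)i);

      /* Wait for all of them to terminate. */
      for (i = 0; i < NTHREADS; i++)
          pthread_join(thrd[i], NULL);

      return 0;
  }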

  24. Pthreads Synchronization • Create/exit/join • provide some form of synchronization, • but only at a very coarse level, • and require thread creation/destruction. • Need for finer-grain synchronization: • mutex locks, • condition variables.

  25. Use of Mutex Locks • To implement critical sections. • Pthreads provides only exclusive locks. • Some other systems allow shared-read, exclusive-write locks.
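  As a small sketch of what a critical section looks like with a Pthreads mutex (the shared counter and lock names here are made up for illustration, not from the slides):

  #include <pthread.h>

  pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
  int count = 0;                          /* shared variable updated by many threads */

  void increment(void)
  {
      pthread_mutex_lock(&count_lock);    /* enter critical section */
      count++;                            /* only one thread at a time executes this */
      pthread_mutex_unlock(&count_lock);  /* leave critical section */
  }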

  26. Barrier Synchronization • A wait at a barrier causes a thread to wait until all threads have performed a wait at the barrier. • At that point, they all proceed.

  27. Implementing Barriers in Pthreads • Count the number of arrivals at the barrier. • Wait if this is not the last arrival. • Make everyone unblock if this is the last arrival. • Since the arrival count is a shared variable, enclose the whole operation in a mutex lock-unlock.

  28. Implementing Barriers in Pthreads
  void barrier()
  {
    pthread_mutex_lock(&mutex_arr);
    arrived++;
    if (arrived < N) {
      pthread_cond_wait(&cond, &mutex_arr);
    }
    else {
      pthread_cond_broadcast(&cond);
      arrived = 0; /* be prepared for next barrier */
    }
    pthread_mutex_unlock(&mutex_arr);
  }

  29. Parallel SOR with Barriers (1 of 2)
  void* sor (void* arg)
  {
    int slice = (int)arg;
    int from = (slice * (n-1))/p + 1;
    int to = ((slice+1) * (n-1))/p + 1;
    for some number of iterations {
      …
    }
  }

  30. Parallel SOR with Barriers (2 of 2)
  for (i=from; i<to; i++)
    for (j=1; j<n; j++)
      temp[i][j] = 0.25 * (grid[i-1][j] + grid[i+1][j]
                           + grid[i][j-1] + grid[i][j+1]);
  barrier();
  for (i=from; i<to; i++)
    for (j=1; j<n; j++)
      grid[i][j] = temp[i][j];
  barrier();

  31. Parallel SOR with Barriers: main
  int main(int argc, char *argv[])
  {
    pthread_t thrd[p];
    /* Initialize mutex and condition variables */
    for (i=0; i<p; i++)
      pthread_create(&thrd[i], &attr, sor, (void*)i);
    for (i=0; i<p; i++)
      pthread_join(thrd[i], NULL);
    /* Destroy mutex and condition variables */
  }

  32. Note again • Many shared memory programming systems (other than Pthreads) have barriers as a basic primitive. • If they do, you should use it, not construct it yourself. • The provided implementation may be more efficient than what you can do yourself.
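  As an aside, newer versions of the POSIX standard do define an optional built-in barrier type, pthread_barrier_t. A minimal sketch of its use, reusing p (the thread count) from the earlier slides:

  #include <pthread.h>

  pthread_barrier_t barr;

  /* once in main(), before creating the threads: */
  pthread_barrier_init(&barr, NULL, p);   /* p = number of participating threads */

  /* inside each thread, wherever barrier() was called above: */
  pthread_barrier_wait(&barr);

  /* once in main(), after joining the threads: */
  pthread_barrier_destroy(&barr);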

  33. Busy Waiting • Not an explicit part of the API. • Available in a general shared memory programming environment.

  34. Busy Waiting
  initially: flag = 0;

  P1: produce data;
      flag = 1;

  P2: while( !flag ) ;
      consume data;

  35. Use of Busy Waiting • On the surface, simple and efficient. • In general, not a recommended practice. • Often leads to messy and unreadable code (blurs the data/synchronization distinction). • May be inefficient.
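  For contrast, the same handshake can be written with a mutex and condition variable, so that P2 blocks instead of spinning. This is a sketch in the slide's own pseudocode style (the names m, c, and flag are chosen for illustration):

  pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
  int flag = 0;

  P1: produce data;
      pthread_mutex_lock(&m);
      flag = 1;
      pthread_cond_signal(&c);
      pthread_mutex_unlock(&m);

  P2: pthread_mutex_lock(&m);
      while( !flag )
          pthread_cond_wait(&c, &m);   /* releases m while blocked, reacquires on wakeup */
      pthread_mutex_unlock(&m);
      consume data;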

  36. Private Data in Pthreads • To make a variable private in Pthreads, you need to make an array out of it. • Index the array by thread identifier, which you have to keep track of yourself. • Not very elegant or efficient.
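  A minimal sketch of this pattern (the variable name and thread count are made up for illustration): each thread uses its slice number to index its own element, so no locking is needed.

  #define NTHREADS 4

  int local_sum[NTHREADS];          /* one "private" copy per thread */

  void *worker(void *arg)
  {
      int slice = (int)(long)arg;   /* thread identifier passed at creation */
      local_sum[slice] = 0;         /* each thread touches only its own element */
      /* ... accumulate into local_sum[slice] ... */
      return NULL;
  }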

  37. Other Primitives in Pthreads • Set the attributes of a thread. • Set the attributes of a mutex lock. • Set scheduling parameters.

  38. ECE 1747 Parallel Programming Machine-independent Performance Optimization Techniques

  39. Returning to Sequential vs. Parallel • Sequential execution time: t seconds. • Startup overhead of parallel execution: t_st seconds (depends on architecture) • (Ideal) parallel execution time: t/p + t_st. • If t/p + t_st > t, no gain.
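  A concrete (made-up) example: with t = 10 s, t_st = 0.5 s and p = 8 processors, the ideal parallel time is 10/8 + 0.5 = 1.75 s, a speedup of about 5.7 rather than 8. If instead t = 1 ms and t_st = 2 ms, then t/p + t_st > t for any p, and parallel execution is slower than sequential.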

  40. General Idea • Parallelism limited by dependences. • Restructure code to eliminate or reduce dependences. • Sometimes possible by compiler, but good to know how to do it by hand.

  41. Optimizations: Example 16
  for (i = 0; i < 100000; i++)
    a[i + 1000] = a[i] + 1;

  Cannot be parallelized as is. May be parallelized by applying certain code transformations.
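  One possible transformation (a sketch, not necessarily the one the slide has in mind): the dependence distance is 1000 iterations, so the loop can be strip-mined into blocks of 1000. The blocks must still execute in order, but the iterations within each block are independent and can be divided among threads.

  /* Outer loop stays sequential; the inner loop has no dependences,
     so its iterations can be split across the p threads. */
  for (ii = 0; ii < 100000; ii += 1000)
    for (i = ii; i < ii + 1000; i++)   /* parallelizable block */
      a[i + 1000] = a[i] + 1;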

  42. Summary • Reorganize code such that • dependences are removed or reduced • large pieces of parallel work emerge • loop bounds become known • … • Code can become messy … there is a point of diminishing returns.

  43. Factors that Determine Speedup • Characteristics of parallel code • granularity • load balance • locality • communication and synchronization

  44. Granularity • Granularity = size of the program unit that is executed by a single processor. • May be a single loop iteration, a set of loop iterations, etc. • Fine granularity leads to: • (positive) ability to use lots of processors • (positive) finer-grain load balancing • (negative) increased overhead

  45. Granularity and Critical Sections • Small granularity => more processors => more critical section accesses => more contention.

  46. Issues in Performance of Parallel Parts • Granularity. • Load balance. • Locality. • Synchronization and communication.

  47. Load Balance • Load imbalance = difference in execution time between processors between barriers. • Execution time may not be predictable. • Regular data parallel: yes. • Irregular data parallel or pipeline: perhaps. • Task queue: no.