This guide covers shared memory programming with POSIX threads and an introduction to message passing. It delves into key concepts including thread creation and synchronization, the use of mutexes and condition variables for thread safety, essential calls such as `pthread_create` and `pthread_join`, and messaging paradigms for interprocess communication. Practical code examples illustrate a Monte Carlo estimation of pi and collective operations for efficient parallel computing. This resource is useful for mastering concurrent programming techniques in C.
More Shared Memory Programming and Intro to Message Passing
Laxmikant Kale, CS433
Posix Threads on Origin 2000
• Shared memory programming on Origin 2000: important calls
• Thread creation and joining
  • pthread_create(pthread_t *threadID, const pthread_attr_t *attr, void *(*functionName)(void *), void *arg);
  • pthread_join(pthread_t threadID, void **result);
• Locks
  • pthread_mutex_t lock;
  • pthread_mutex_lock(&lock);
  • pthread_mutex_unlock(&lock);
• Condition variables:
  • pthread_cond_t cv;
  • pthread_cond_init(&cv, (pthread_condattr_t *) 0);
  • pthread_cond_wait(&cv, &cv_mutex);
  • pthread_cond_broadcast(&cv);
• Semaphores, and other calls
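The slides list the condition-variable calls but do not show them in use. The following minimal sketch (not from the original slides; the names waiter, go, and cv_mutex are illustrative) shows the usual pattern: the waiting thread holds the mutex and re-tests the condition in a loop around pthread_cond_wait, while the signaling thread updates the shared state under the same mutex and broadcasts.

/* Condition-variable sketch: worker threads wait until main sets a "go" flag. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cv_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv       = PTHREAD_COND_INITIALIZER;
static int go = 0;                   /* shared condition, protected by cv_mutex */

void *waiter(void *arg)
{
  pthread_mutex_lock(&cv_mutex);
  while (!go)                        /* always re-test: wakeups may be spurious */
    pthread_cond_wait(&cv, &cv_mutex);
  pthread_mutex_unlock(&cv_mutex);
  printf("thread %ld released\n", (long) arg);
  return 0;
}

int main(void)
{
  pthread_t t[4];
  long i;
  for (i = 0; i < 4; i++)
    pthread_create(&t[i], NULL, waiter, (void *) i);
  pthread_mutex_lock(&cv_mutex);
  go = 1;                            /* change the condition ...              */
  pthread_cond_broadcast(&cv);       /* ... then wake all waiting threads     */
  pthread_mutex_unlock(&cv_mutex);
  for (i = 0; i < 4; i++)
    pthread_join(t[i], NULL);
  return 0;
}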
Declarations

/* pgm.c */
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>

#define nThreads 4
#define nSamples 1000000

typedef struct _shared_value {
  pthread_mutex_t lock;
  int value;
} shared_value;

shared_value sval;
Function in each thread

/* Each thread performs nSamples/nThreads Monte Carlo trials: a random point
   (x,y) in the unit square lands inside the quarter circle with probability
   pi/4, so the fraction of successes estimates pi/4. */
void *doWork(void *id)
{
  size_t tid = (size_t) id;
  int nsucc, ntrials, i;
  ntrials = nSamples/nThreads;
  nsucc = 0;
  srand48((long) tid);                  /* seed each thread differently */
  for (i = 0; i < ntrials; i++) {
    double x = drand48();
    double y = drand48();
    if ((x*x + y*y) <= 1.0) nsucc++;
  }
  pthread_mutex_lock(&(sval.lock));     /* guard the shared accumulator */
  sval.value += nsucc;
  pthread_mutex_unlock(&(sval.lock));
  return 0;
}
Main function

int main(int argc, char *argv[])
{
  pthread_t tids[nThreads];
  size_t i;
  double est;
  pthread_mutex_init(&(sval.lock), NULL);
  sval.value = 0;
  printf("Creating Threads\n");
  for (i = 0; i < nThreads; i++)
    pthread_create(&tids[i], NULL, doWork, (void *) i);
  printf("Created Threads... waiting for them to complete\n");
  for (i = 0; i < nThreads; i++)
    pthread_join(tids[i], NULL);
  printf("Threads Completed...\n");
  est = 4.0 * ((double) sval.value / (double) nSamples);
  printf("Estimated Value of PI = %lf\n", est);
  exit(0);
}
Compiling: Makefile

# Makefile
# for Solaris
FLAGS = -mt
# for Origin2000
# FLAGS =

pgm: pgm.c
	cc -o pgm $(FLAGS) pgm.c -lpthread

clean:
	rm -f pgm *.o *~
Message Passing
• Program consists of independent processes,
  • each running in its own address space
  • Processors have direct access only to their own memory
• Each processor typically executes the same executable, but may be running a different part of the program at any given time
• Special primitives exchange data: send/receive
• Early theoretical systems:
  • CSP: communicating sequential processes
  • A send and the matching receive on another processor both wait: neither completes until the other is ready
  • OCCAM on Transputers used this model
  • Performance problems due to unnecessary(?) wait
• Current systems:
  • Send operations don't wait for receipt on the remote processor
Message Passing
[Diagram: PE0 executes send and PE1 executes receive; the data is copied from the send buffer on PE0 into the receive buffer on PE1]
Basic Message Passing
• We will describe a hypothetical message passing system,
  • with just a few calls that define the model
• Later, we will look at real message passing models (e.g. MPI), with a more complex set of calls
• Basic calls (sketched below):
  • send(int proc, int tag, int size, char *buf);
  • recv(int proc, int tag, int size, char *buf);
  • recv may return the actual number of bytes received in some systems
  • tag and proc may be wildcarded in a recv:
    • recv(ANY, ANY, 1000, &buf);
  • broadcast
  • Other global operations (reductions)
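The slide leaves these primitives abstract. The declarations below are one possible rendering of that hypothetical interface (the names send, recv, and ANY come from the slide; the ANY encoding and return types are assumptions), included only to make the model concrete:

/* Hypothetical message-passing interface from the slide; not a real library. */
#define ANY (-1)   /* assumed wildcard value for proc or tag in recv */

/* Send size bytes from buf to processor proc, labeled with tag.
   In the model described above, send does not wait for the receive. */
void send(int proc, int tag, int size, char *buf);

/* Receive a message matching (proc, tag) into buf; proc and tag may be ANY.
   May return the actual number of bytes received. */
int recv(int proc, int tag, int size, char *buf);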
Pi with message passing

int count = 0, c;

main() {
  Seed s = makeSeed(myProcessorNum());
  for (i = 0; i < 100000/P; i++) {
    x = random(s);
    y = random(s);
    if (x*x + y*y < 1.0) count++;
  }
  /* every processor sends its local count to processor 0 */
  send(0, 1, 4, &count);
Pi with message passing (continued)

  /* processor 0 collects the partial counts and prints the estimate */
  if (myProcessorNum() == 0) {
    count = 0;                      /* processor 0 also sent to itself, so restart the total */
    for (i = 0; i < maxProcessors(); i++) {
      recv(i, 1, 4, &c);
      count += c;
    }
    printf("pi=%f\n", 4.0*count/100000);
  }
} /* end function main */
Collective calls
• Message passing is often, but not always, used for the SPMD style of programming:
  • SPMD: Single Program, Multiple Data
  • All processors execute essentially the same program, and the same steps, but not in lockstep
  • Communication tends to happen at nearly the same points of the program on all processors
• Collective calls:
  • global reductions (such as max or sum); see the sketch after this list
  • syncBroadcast (often just called broadcast):
    • syncBroadcast(whoAmI, dataSize, dataBuffer);
    • whoAmI: sender or receiver
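With a global reduction, the explicit receive loop in the earlier pi program disappears. The sketch below is in the same pseudocode style as the slides but uses a hypothetical globalSumInt reduction, which is not part of the slides' minimal send/recv model and is purely illustrative:

/* Pi using a hypothetical collective reduction (illustrative pseudocode;
   globalSumInt is an assumed primitive that returns the sum over all processors). */
main() {
  int count = 0, total;
  Seed s = makeSeed(myProcessorNum());
  for (i = 0; i < 100000/P; i++) {
    x = random(s);
    y = random(s);
    if (x*x + y*y < 1.0) count++;
  }
  total = globalSumInt(count);      /* every processor contributes; all obtain the sum */
  if (myProcessorNum() == 0)
    printf("pi=%f\n", 4.0*total/100000);
}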
Standardization of message passing
• Historically:
  • nxlib (on Intel hypercubes)
  • nCUBE variants
  • PVM
  • Everyone had their own variants
• MPI (Message Passing Interface) standard:
  • Vendors, ISVs, and academics got together
  • with the intent of standardizing current practice
  • Ended up with a large standard
  • Popular, due to vendor support
  • Support for
    • communicators: avoiding tag conflicts, ..
    • Data types:
    • ..
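Since the slides point forward to MPI, here is a minimal sketch (not from the original slides) of the same pi estimate written against the standard MPI interface; it assumes an MPI implementation such as MPICH or Open MPI is available.

/* Pi with MPI: each rank computes a partial count; MPI_Reduce sums them on rank 0. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NSAMPLES 1000000

int main(int argc, char *argv[])
{
  int rank, size, i, count = 0, total = 0;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  srand48(rank);                               /* seed each rank differently */
  for (i = 0; i < NSAMPLES / size; i++) {
    double x = drand48(), y = drand48();
    if (x*x + y*y <= 1.0) count++;
  }
  /* sum the partial counts onto rank 0 */
  MPI_Reduce(&count, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  if (rank == 0)
    printf("Estimated Value of PI = %lf\n", 4.0 * total / NSAMPLES);
  MPI_Finalize();
  return 0;
}

Such a program is typically compiled with mpicc and launched with mpirun (for example, mpirun -np 4 ./pi), though the exact commands depend on the MPI installation.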