
PARALLEL COMPUTING WITH MPI



Presentation Transcript


  1. PARALLEL COMPUTING WITH MPI Anne Weill-Zrahia With acknowledgments to Cornell Theory Center

  2. Introduction to Parallel Computing • Parallel computer: a set of processors that work cooperatively to solve a computational problem • Distributed computing: a number of processors communicating over a network • Metacomputing: use of several parallel computers

  3. Why parallel computing • Single processor performance – limited by physics • Multiple processors – break down problem into simple tasks or domains • Plus – obtain same results as in sequential program, faster. • Minus – need to rewrite code

  4. Parallel classification • Parallel architectures Shared Memory / Distributed Memory • Programming paradigms Data parallel / Message passing

  5. Shared memory [Diagram: four processors (P) connected to one shared Memory]

  6. Shared Memory • Each processor can access any part of the memory • Access times are uniform (in principle) • Easier to program (no explicit message passing) • Bottleneck when several tasks access the same location

  7. Data-parallel programming • Single program defining operations • Single memory • Loosely synchronous (completion of loop) • Parallel operations on array elements

  8. Distributed Memory • Processor can only access local memory • Access times depend on location • Processors must communicate via explicit message passing

  9. Distributed Memory [Diagram: processor–memory pairs connected by an interconnection network]

  10. Message Passing Programming • Separate program on each processor • Local Memory • Control over distribution and transfer of data • Additional complexity of debugging due to communications

  11. Performance issues • Concurrency – ability to perform actions simultaneously • Scalability – performance is not impaired by an increasing number of processors • Locality – high ratio of local to remote memory accesses (or low communication)

  12. SP2 Benchmark • Goal: checking the performance of real-world applications on the SP2 • Execution time (seconds): CPU time for applications • Speedup = (Execution time for 1 processor) / (Execution time for p processors)

  13. WHAT is MPI? • A message-passing library specification • Extended message-passing model • Not specific to an implementation or computer

  14. BASICS of MPI PROGRAMMING • MPI is a message-passing library • Assumes: a distributed-memory architecture • Includes: routines for performing communication (exchange of data and synchronization) among the processors

  15. Message Passing • Data transfer + synchronization • Synchronization: the act of bringing one or more processes to known points in their execution • Distributed memory: memory split up into segments, each of which may be accessed by only one process

  16. Message Passing [Diagram: handshake between two processes – "May I send?", "yes", then the data is sent]

  17. MPI STANDARD • Standard by consensus, designed in an open forum • Introduced by the MPI Forum in May 1994, updated in June 1995 • MPI-2 (1997) provides extensions to the MPI standard

  18. IS MPI Large or Small? • A large number of features has been included (blocking/non-blocking, collective vs. point-to-point, efficiency features) However ... • A small subset of functions is sufficient

  19. Why use MPI ? • Standardization • Portability • Performance • Richness • Designed to enable libraries

  20. Writing an MPI Program • If there is a serial version, make sure it is debugged • If not, try to write a serial version first • When debugging in parallel, start with a few nodes

  21. Format of MPI routines

  22. Six useful MPI functions

  23. Communication routines

  24. End MPI part of program

  25. #include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int tag = 100;
    int rank, size, i;
    MPI_Status status;
    char message[12];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    strcpy(message, "Hello,world");
    if (rank == 0) {
        for (i = 1; i < size; i++) {
            MPI_Send(message, 12, MPI_CHAR, i, tag, MPI_COMM_WORLD);
        }
    } else {
        MPI_Recv(message, 12, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
    }
    printf("node %d : %s\n", rank, message);
    MPI_Finalize();
    return 0;
}

  26. MPI Messages • DATA – the data to be sent • ENVELOPE – information used to route the data

  27. Description of MPI_Send (MPI_Recv)

  28. Description of MPI_Send (MPI_Recv)

  29. Some useful remarks • Source= MPI_ANY_SOURCE means that any source is acceptable • Tags specified by sender and receiver must match, or MPI_ANY_TAG : any tag is acceptable • Communicator must be the same for send/receive. Usually : MPI_COMM_WORLD

  30. POINT-TO-POINT COMMUNICATION • Transmission of a message between one pair of processes • Programmer can choose mode of transmission

  31. MODE of TRANSMISSION • Can be chosen by the programmer: Synchronous mode, Ready mode, Buffered mode • …or let the system decide: Standard mode

  32. BLOCKING /NON-BLOCKING COMMUNICATIONS

  33. BLOCKING STANDARD SEND (Size > threshold) MPI_SEND blocks: the sending task S waits, and the data transfer begins only once MPI_RECV has been posted on the receiving task R. The sender proceeds when the data transfer from the source is complete; the receiving task continues when the transfer into its buffer is complete.

  34. NON-BLOCKING STANDARD SEND (Size > threshold) MPI_ISEND returns immediately; the transfer begins once MPI_IRECV has been posted on R. The sending task waits at MPI_WAIT only until the data transfer from the source is complete; there is no interruption at all if the wait is placed late enough.

  35. BLOCKING STANDARD SEND (Size <= threshold) The message is copied to a buffer on the receiver R, so MPI_SEND completes as soon as the data transfer from the source is done. The receiving task continues when the transfer from that buffer into the user's buffer is complete (MPI_RECV).

  36. NON-BLOCKING STANDARD SEND (Size <= threshold) MPI_ISEND causes no delay even though the message is not yet in a buffer on R. The copy through the receiver-side buffer can be avoided entirely if MPI_IRECV is posted early enough, and MPI_WAIT causes no delay if it is placed late enough.

  37. BLOCKING COMMUNICATION

  38. NON-BLOCKING

  39. Deadlock program (cont)
      if ( irank.EQ.0 ) then
         idest = 1
         isrc  = 1
         isend_tag = ITAG_A
         irecv_tag = ITAG_B
      else if ( irank.EQ.1 ) then
         idest = 0
         isrc  = 0
         isend_tag = ITAG_B
         irecv_tag = ITAG_A
      end if
C     -------------------------------------------------------------
C     send and receive messages
C     -------------------------------------------------------------
      print *, " Task ", irank, " has sent the message"
      call MPI_Send ( rmessage1, MSGLEN, MPI_REAL, idest, isend_tag,
     .                MPI_COMM_WORLD, ierr )
      call MPI_Recv ( rmessage2, MSGLEN, MPI_REAL, isrc, irecv_tag,
     .                MPI_COMM_WORLD, istatus, ierr )
      print *, " Task ", irank, " has received the message"
      call MPI_Finalize (ierr)
      end

  40. DEADLOCK example [Diagram: tasks A and B each call MPI_SEND first and MPI_RECV second, so each blocking send waits for a receive that the other task has not yet posted]

  41. Deadlock example • SP2 implementation: no Receive has been posted yet, so both processes block • Solutions: different ordering of the calls, non-blocking calls, MPI_Sendrecv

  42. Determining Information about Messages • Wait • Test • Probe

  43. MPI_WAIT • Useful for both sender and receiver of non-blocking communications • Receiving process blocks until message is received, under programmer control • Sending process blocks until send operation completes, at which time the message buffer is available for re-use

  44. MPI_WAIT [Diagram: the sender S computes, then transmits; the receiver R blocks in MPI_WAIT until the transfer completes]

  45. MPI_TEST [Diagram: after MPI_Isend the sender S keeps computing while the message transmits; MPI_TEST checks for completion without blocking]

  46. MPI_TEST • Used for both sender and receiver of non-blocking communication • Non-blocking call • Receiver checks to see if a specific sender has sent a message that is waiting to be delivered ... messages from all other senders are ignored

  47. MPI_TEST (cont.) • The sender can find out whether the message buffer can be re-used ... it has to wait until the operation is complete before doing so

  48. MPI_PROBE • Receiver is notified when messages from potentially any sender arrive and are ready to be processed. • Blocking call
