An Introduction to Parallel Programming with MPI



  1. An Introduction to Parallel Programming with MPI • February 17, 19, 24, and 26, 2004 • David Adams • http://research.cs.vt.edu/lasca/schedule

  2. Creating Accounts • https://admin.cs.vt.edu/cslab/index.pl

  3. Outline • Disclaimers • Overview of basic parallel programming on a cluster with the goals of MPI • Batch system interaction • Startup procedures • Quick review • Blocking message passing • Non-blocking message passing • Lab day • Collective communications

  4. Review • Functions we have covered in detail: MPI_INIT, MPI_FINALIZE, MPI_COMM_SIZE, MPI_COMM_RANK, MPI_SEND, MPI_RECV • Useful constants: MPI_COMM_WORLD, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_SUCCESS
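
To make the review concrete, here is a minimal C sketch (not taken from the slides) that exercises every function and constant listed above; it assumes at least two processes:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        if (MPI_Init(&argc, &argv) != MPI_SUCCESS) return 1;
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */

        if (rank == 0) {
            int value = 42;  /* illustrative payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Status status;
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }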

  5.–15. Motivating Example for Deadlock • [Animated figure spanning slides 5–15: processes exchange messages through a sequence of blocking SENDs and RECVs (SEND RECV RECV SEND RECV SEND …); the animation steps through Timesteps 1 to 10, at which point the blocking calls deadlock.]

  16. Solution • MPI_SENDRECV(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror) • The semantics of a send-receive operation is what would be obtained if the caller forked two concurrent threads, one to execute the send, and one to execute the receive, followed by a join of these two threads.
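
As a sketch of the fix (the ring topology and variable names here are illustrative, not from the slides), every process can call MPI_SENDRECV simultaneously without deadlock because MPI orders the two halves of the exchange internally:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int right = (rank + 1) % size;         /* neighbor to send to   */
        int left  = (rank - 1 + size) % size;  /* neighbor to recv from */
        double sendbuf = (double)rank, recvbuf;
        MPI_Status status;

        /* Send to the right neighbor and receive from the left one
           in a single call; no SEND/RECV ordering to get wrong. */
        MPI_Sendrecv(&sendbuf, 1, MPI_DOUBLE, right, 0,
                     &recvbuf, 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, &status);

        printf("rank %d received %g from rank %d\n", rank, recvbuf, left);
        MPI_Finalize();
        return 0;
    }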

  17. Nonblocking Message Passing • Allows for the overlap of communication and computation. • Completion of a message is broken into four steps instead of two. • post-send • complete-send • post-receive • complete-receive

  18. Posting Operations • MPI_ISEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR) • IN <type> BUF(*) • IN INTEGER COUNT, DATATYPE, DEST, TAG, COMM • OUT INTEGER REQUEST, IERROR • MPI_IRECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, REQUEST, IERROR) • IN <type> BUF(*) • IN INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM • OUT INTEGER REQUEST, IERROR
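
For reference, the corresponding C bindings; in C the request handle comes back through a pointer argument and the error code is the return value:

    int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest,
                  int tag, MPI_Comm comm, MPI_Request *request);
    int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
                  int tag, MPI_Comm comm, MPI_Request *request);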

  19. Request Objects • All nonblocking communications use request objects to identify communication operations and link the posting operation with the completion operation. • Conceptually, they can be thought of as a pointer to a specific message instance floating around in MPI space. • Just as in pointers, request handles must be treated with care or you can create request handle leaks (like a memory leak) and completely lose access to the status of a message.

  20. Request Objects • The value MPI_REQUEST_NULL is used to indicate an invalid request handle. Operations that deallocate request objects set the request handle to this value. • Posting operations allocate memory for request objects and completion operations deallocate that memory and clean up the space.
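
A sketch of such a leak in C (buf1, buf2, n, and dest are illustrative placeholders): overwriting a request handle before completing it orphans the first message's request object:

    MPI_Request request;
    MPI_Status  status;

    MPI_Isend(buf1, n, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &request);
    /* Leak: this call overwrites the handle before the first send was
       completed, so that send can no longer be waited on or tested. */
    MPI_Isend(buf2, n, MPI_DOUBLE, dest, 1, MPI_COMM_WORLD, &request);

    MPI_Wait(&request, &status);  /* completes only the second send */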

  21. Completion Operations • MPI_WAIT(REQUEST, STATUS, IERROR) • INOUT INTEGER REQUEST • OUT INTEGER STATUS(MPI_STATUS_SIZE), IERROR • A call to MPI_WAIT returns when the operation identified by REQUEST is complete. • MPI_WAIT is the blocking completion operation, used when the program has determined it cannot do any more useful work without completing the current message. In that case it chooses to block until the corresponding send or receive completes. • In iterative parallel code, an MPI_WAIT is often placed directly before the next posting operation that intends to reuse the same request variable. • Successful completion of MPI_WAIT sets REQUEST = MPI_REQUEST_NULL.
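
The usual post/compute/complete pattern looks like this sketch (buf, count, dest, and do_work() are illustrative placeholders):

    MPI_Request request;
    MPI_Status  status;

    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &request);

    do_work();  /* anything that does not touch buf */

    MPI_Wait(&request, &status);  /* blocks until the send completes and
                                     sets request to MPI_REQUEST_NULL */
    /* buf may now be safely modified or reused */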

  22. Completion Operations • MPI_TEST(REQUEST, FLAG, STATUS, IERROR) • INOUT INTEGER REQUEST • OUT LOGICAL FLAG • OUT INTEGER STATUS(MPI_STATUS_SIZE), IERROR • A call to MPI_TEST returns FLAG = true if the operation identified by REQUEST is complete. • MPI_TEST is the nonblocking completion operation. • If FLAG = true, MPI_TEST cleans up the space associated with REQUEST, deallocating the request object and setting REQUEST = MPI_REQUEST_NULL. • MPI_TEST lets the user write code that communicates as soon as messages are ready but continues doing useful work when they are not.
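
A sketch of the polling style that MPI_TEST makes possible (buf, count, source, and do_useful_work() are illustrative placeholders):

    int flag = 0;
    MPI_Request request;
    MPI_Status  status;

    MPI_Irecv(buf, count, MPI_DOUBLE, source, 0, MPI_COMM_WORLD, &request);
    while (!flag) {
        MPI_Test(&request, &flag, &status);  /* returns immediately */
        if (!flag)
            do_useful_work();  /* message not ready yet; keep computing */
    }
    /* flag is true: buf holds the message, request is MPI_REQUEST_NULL */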

  23. Maximizing Overlap • To achieve maximum overlap between computation and communication, communications should be started as soon as possible and completed as late as possible. • Sends should be posted as soon as the data to be sent is available. • Receives should be posted as soon as the receive buffer can be used. • Sends should be completed just before the send buffer is to be reused. • Receives should be completed just before the data in the buffer is to be reused. • Overlap can often be increased by reordering the computation.
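
These rules fall out naturally in a halo-exchange sketch (all names are illustrative): post both transfers first, fill the communication time with interior work, and complete each request only when its buffer is about to be touched:

    MPI_Request sreq, rreq;
    MPI_Status  status;

    /* Post as soon as halo_out is ready and halo_in is free. */
    MPI_Irecv(halo_in,  n, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &rreq);
    MPI_Isend(halo_out, n, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &sreq);

    update_interior();         /* needs neither halo_in nor halo_out */

    MPI_Wait(&rreq, &status);  /* complete just before halo_in is read */
    update_boundary();         /* uses halo_in */

    MPI_Wait(&sreq, &status);  /* complete just before halo_out is reused */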

  24. Setting up your account for MPI • http://courses.cs.vt.edu/~cs4234/MPI/first_exercise.html • List of all MCB 124 machines • http://www.cslab.vt.edu/124.shtml

  25. More Stuff • http://courses.cs.vt.edu/~cs4234/MPI/first_exercise.html • Put /home/grads/raghavgn/mpich-1.2.5/bin in your path: open the file .bash_profile and append :/home/grads/raghavgn/mpich-1.2.5/bin to the PATH line, so that • PATH=$PATH:$HOME/bin • becomes • PATH=$PATH:$HOME/bin:/home/grads/raghavgn/mpich-1.2.5/bin • Make a subdirectory, mkdir MPI, and cd to it. • cp -r /home/grads/daadams3/MPI .

  26. Compilation and Execution • Two folders, one for C and one for FORTRAN 77 • Hello world example for C: • Compile and link: mpicc -o hello hello.c • Run on 4 processors: mpirun -np 4 -machinefile ../mymachines hello • For Fortran: • Compile and link: mpif77 -o hello hello.f • Run on 4 processors: mpirun -np 4 -machinefile ../mymachines hello
