
More MPI

Presentation Transcript


  1. More MPI

  2. Point-to-point Communication • Involves a pair of processes • One process sends a message • Other process receives the message

  3. Send/Receive Not Collective

  4. Function MPI_Send

     int MPI_Send (void *message, int count, MPI_Datatype datatype,
                   int dest, int tag, MPI_Comm comm)

  5. Function MPI_Recv

     int MPI_Recv (void *message, int count, MPI_Datatype datatype,
                   int source, int tag, MPI_Comm comm, MPI_Status *status)
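
     A minimal sketch of how these two calls pair up (not from the slides: the
     value sent and the variable names are illustrative assumptions):

     #include <mpi.h>
     #include <stdio.h>

     int main(int argc, char *argv[]) {
         int id;
         float value = 0.0f;
         MPI_Status status;

         MPI_Init(&argc, &argv);
         MPI_Comm_rank(MPI_COMM_WORLD, &id);

         if (id == 0) {
             value = 3.14f;
             /* dest = 1, tag = 0 */
             MPI_Send(&value, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
         } else if (id == 1) {
             /* source = 0; the tag must match the send's tag */
             MPI_Recv(&value, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
             printf("Rank 1 received %f\n", value);
         }

         MPI_Finalize();
         return 0;
     }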

  6. Inside MPI_Send and MPI_Recv
     [Diagram: on the sending process the message is copied from program memory into a system buffer; on the receiving process it is copied from the system buffer into program memory.]

  7. Return from MPI_Send
     • Function blocks until message buffer free
     • Message buffer is free when
       • message copied to system buffer, or
       • message transmitted
     • Typical scenario
       • message copied to system buffer
       • transmission overlaps computation

  8. Return from MPI_Recv • Function blocks until message in buffer • If message never arrives, function never returns • Which leads us to …

  9. Example

     float a, b, c;
     int id;
     MPI_Status status;
     …
     if (id == 0) {
         MPI_Recv (&b, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &status);
         MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
         c = (a + b) / 2.0;
     } else if (id == 1) {
         MPI_Recv (&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
         MPI_Send (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
         c = (a + b) / 2.0;
     }

  10. Example

     float a, b, c;
     int id;
     MPI_Status status;
     …
     if (id == 0) {
         MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
         MPI_Recv (&b, 1, MPI_FLOAT, 1, 1, MPI_COMM_WORLD, &status);
         c = (a + b) / 2.0;
     } else if (id == 1) {
         MPI_Send (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
         MPI_Recv (&a, 1, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, &status);
         c = (a + b) / 2.0;
     }

  11. Deadlock • Deadlock: process waiting for a condition that will never become true • Easy to write send/receive code that deadlocks • Two processes: both receive before send • Send tag doesn’t match receive tag • Process sends message to wrong destination process

  12. Coding Send/Receive

     …
     if (ID == j) {
         …
         Receive from i
         …
     }
     …
     if (ID == i) {
         …
         Send to j
         …
     }
     …

     Receive is before Send. Why does this work?
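
     It works because each process executes only the branch that matches its
     own rank: process j runs the receive while process i runs the send, so no
     single process tries to receive before its own send. A concrete sketch of
     the pattern with ranks 0 and 1 (variable names are illustrative; the usual
     MPI_Init/MPI_Finalize boilerplate is assumed):

     int id, token = 0;
     MPI_Status status;
     MPI_Comm_rank(MPI_COMM_WORLD, &id);

     if (id == 1) {              /* the "j" process: only rank 1 executes this receive */
         MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
     }
     if (id == 0) {              /* the "i" process: only rank 0 executes this send */
         token = 42;
         MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
     }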

  13. Distributing Data: Scatter / Gather

  14. Getting Data Places • There are many interesting ways to arrange and distribute data for parallel use • Many of these follow fairly common “patterns” – basic structures • The MPI standards group wanted to provide flexible ways to distribute data • MPI uses variations on the concepts of “scatter” and “gather”

  15. Collective Communications • Broadcast the coefficients to all processors • Scatter the vectors among N processors as zpart, xpart, and ypart • Gather the results back to the root processor when completed • Calls can return as soon as their participation is complete
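
     A sketch of that workflow for an axpy-style computation zpart = a·xpart + ypart,
     assuming the root (rank 0) holds full vectors x, y, z of length n, that n is
     divisible by the number of processes, and the usual MPI boilerplate; apart from
     zpart/xpart/ypart, the names are assumptions, not the slides' code:

     int p, local_n;
     MPI_Comm_size(MPI_COMM_WORLD, &p);
     local_n = n / p;                 /* assumes n is divisible by p */

     /* Broadcast the coefficient to all processors */
     MPI_Bcast(&a, 1, MPI_FLOAT, 0, MPI_COMM_WORLD);

     /* Scatter the vectors among the processors */
     MPI_Scatter(x, local_n, MPI_FLOAT, xpart, local_n, MPI_FLOAT, 0, MPI_COMM_WORLD);
     MPI_Scatter(y, local_n, MPI_FLOAT, ypart, local_n, MPI_FLOAT, 0, MPI_COMM_WORLD);

     /* Each process works on its own piece */
     for (int i = 0; i < local_n; i++)
         zpart[i] = a * xpart[i] + ypart[i];

     /* Gather the results back to the root processor */
     MPI_Gather(zpart, local_n, MPI_FLOAT, z, local_n, MPI_FLOAT, 0, MPI_COMM_WORLD);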

  16. Scatter

     int MPI_Scatter (void *sendbuf, int sendcount, MPI_Datatype sendtype,
                      void *recvbuf, int recvcount, MPI_Datatype recvtype,
                      int root, MPI_Comm comm)
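
     A hypothetical sketch, assuming exactly 4 processes and the usual MPI
     boilerplate: the root spreads a 16-element array so each rank gets 4 ints.
     Note that sendcount is the number of elements sent to EACH rank, not the
     total size of sendbuf:

     int sendbuf[16];   /* significant only at the root */
     int recvbuf[4];    /* every rank, including the root, receives 4 elements */
     MPI_Scatter(sendbuf, 4, MPI_INT, recvbuf, 4, MPI_INT, 0, MPI_COMM_WORLD);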

  17. Gather

     int MPI_Gather (void *sendbuf, int sendcount, MPI_Datatype sendtype,
                     void *recvbuf, int recvcount, MPI_Datatype recvtype,
                     int root, MPI_Comm comm)
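
     The mirror-image sketch (again hypothetical): every rank contributes one
     double and the root collects them in rank order. recvcount is the count
     received from EACH rank, and recvbuf only has to be valid at the root:

     double my_result = 0.0;     /* assume each rank has computed its piece here */
     double all_results[16];     /* assumed upper bound on the number of ranks */
     MPI_Gather(&my_result, 1, MPI_DOUBLE, all_results, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);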

  18. Scatterv

     int MPI_Scatterv (void *sendbuf, int *sendcounts, int *displs,
                       MPI_Datatype sendtype, void *recvbuf, int recvcount,
                       MPI_Datatype recvtype, int root, MPI_Comm comm)
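
     A sketch of how the extra arguments are typically filled in when n floats
     do not divide evenly among the processes (fullvec, n, and the surrounding
     MPI/stdlib boilerplate are assumptions, not the slides' code):

     /* sendcounts[i]: number of elements sent to rank i.
        displs[i]: offset (in elements) into sendbuf where rank i's block starts. */
     int p, id;
     MPI_Comm_size(MPI_COMM_WORLD, &p);
     MPI_Comm_rank(MPI_COMM_WORLD, &id);

     int *sendcounts = malloc(p * sizeof(int));
     int *displs     = malloc(p * sizeof(int));
     for (int i = 0, offset = 0; i < p; i++) {
         sendcounts[i] = n / p + (i < n % p ? 1 : 0);   /* spread the remainder over the low ranks */
         displs[i]     = offset;
         offset       += sendcounts[i];
     }

     float *mypart = malloc(sendcounts[id] * sizeof(float));
     MPI_Scatterv(fullvec, sendcounts, displs, MPI_FLOAT,
                  mypart, sendcounts[id], MPI_FLOAT, 0, MPI_COMM_WORLD);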

  19. Gatherv

     int MPI_Gatherv (void *sendbuf, int sendcount, MPI_Datatype sendtype,
                      void *recvbuf, int *recvcounts, int *displs,
                      MPI_Datatype recvtype, int root, MPI_Comm comm)
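
     Continuing the Scatterv sketch above (still hypothetical), the same counts
     and displacements describe where each rank's piece lands when it is
     gathered back at the root:

     /* Each rank sends back its sendcounts[id] elements; at the root the
        recvcounts/displs arrays place them into fullvec in rank order. */
     MPI_Gatherv(mypart, sendcounts[id], MPI_FLOAT,
                 fullvec, sendcounts, displs, MPI_FLOAT, 0, MPI_COMM_WORLD);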
