
Point-to-Point Communication




  1. Point-to-Point Communication Self Test with solution

  2. Self Test
  Question 1: MPI_SEND is used to send an array of 10 4-byte integers. At the time MPI_SEND is called, MPI has over 50 Kbytes of internal message buffer free on the sending process. Choose the best answer.
  A. This is a blocking send. Most MPI implementations will copy the message into MPI internal message buffer and return.
  B. This is a blocking send. Most MPI implementations will block the sending process until the destination process has received the message.
  C. This is a non-blocking send. Most MPI implementations will copy the message into MPI internal message buffer and return.

  3. Self Test
  Question 2: MPI_SEND is used to send an array of 100,000 8-byte reals. At the time MPI_SEND is called, MPI has less than 50 Kbytes of internal message buffer free on the sending process. Choose the best answer.
  A. This is a blocking send. Most MPI implementations will block the calling process until enough message buffer becomes available.
  B. This is a blocking send. Most MPI implementations will block the sending process until the destination process has received the message.
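
A minimal C sketch of the two send scenarios above (an illustration added to this transcript, not from the original slides). It assumes the program runs with at least 2 processes; the message sizes mirror the questions.

  #include <mpi.h>
  #include <stdlib.h>

  #define BIG 100000  /* 100,000 8-byte reals, as in question 2 */

  int main(int argc, char *argv[])
  {
      int rank, small[10] = {0};
      double *big = malloc(BIG * sizeof(double));

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          /* 40 bytes: most implementations copy this into internal
             buffer and return at once, even though MPI_Send is
             formally a blocking call (question 1). */
          MPI_Send(small, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);

          /* 800 Kbytes: typically too big to buffer eagerly, so the
             call usually blocks until rank 1 posts the matching
             receive (question 2). */
          MPI_Send(big, BIG, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(small, 10, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          MPI_Recv(big, BIG, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
      }

      free(big);
      MPI_Finalize();
      return 0;
  }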

  4. Self Test
  Question 3: MPI_SEND is used to send a large array. When MPI_SEND returns, the programmer may safely assume:
  A. The destination process has received the message.
  B. The array has been copied into MPI internal message buffer.
  C. Either the destination process has received the message or the array has been copied into MPI internal message buffer.
  D. None of the above.

  5. Self Test
  Question 4: MPI_ISEND is used to send an array of 10 4-byte integers. At the time MPI_ISEND is called, MPI has over 50 Kbytes of internal message buffer free on the sending process. Choose the best answer.
  A. This is a non-blocking send. MPI will generate a request id and then return.
  B. This is a non-blocking send. Most MPI implementations will copy the message into MPI internal message buffer and return.

  6. Self Test
  Question 5: MPI_ISEND is used to send an array of 10 4-byte integers. At the time MPI_ISEND is called, MPI has over 50 Kbytes of internal message buffer free on the sending process. After calling MPI_ISEND, the sending process calls MPI_WAIT to wait for completion of the send operation. Choose the best answer.
  A. MPI_WAIT will not return until the destination process has received the message.
  B. MPI_WAIT may return before the destination process has received the message.

  7. Self Test
  Question 6: MPI_ISEND is used to send an array of 100,000 8-byte reals. At the time MPI_ISEND is called, MPI has less than 50 Kbytes of internal message buffer free on the sending process. Choose the best answer.
  A. This is a non-blocking send. MPI will generate a request id and return.
  B. This is a blocking send. Most MPI implementations will block the sending process until the destination process has received the message.

  8. Self Test
  Question 7: MPI_ISEND is used to send an array of 100,000 8-byte reals. At the time MPI_ISEND is called, MPI has less than 50 Kbytes of internal message buffer free on the sending process. After calling MPI_ISEND, the sending process calls MPI_WAIT to wait for completion of the send operation. Choose the best answer.
  A. This is a blocking send. In most implementations, MPI_WAIT will not return until the destination process has received the message.
  B. This is a non-blocking send. In most implementations, MPI_WAIT will not return until the destination process has received the message.
  C. This is a non-blocking send. In most implementations, MPI_WAIT will return before the destination process has received the message.
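
A companion sketch for the non-blocking questions above (again an added illustration, assuming at least 2 processes): MPI_ISEND returns a request id immediately regardless of message size, and the difference between the small and large cases only shows up at MPI_WAIT.

  #include <mpi.h>
  #include <stdlib.h>

  #define BIG 100000

  int main(int argc, char *argv[])
  {
      int rank, small[10] = {0};
      double *big = malloc(BIG * sizeof(double));
      MPI_Request req[2];

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 0) {
          /* Both calls return at once with a request id, whatever the
             message size or available buffer space (questions 4 and 6). */
          MPI_Isend(small, 10, MPI_INT, 1, 0, MPI_COMM_WORLD, &req[0]);
          MPI_Isend(big, BIG, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &req[1]);

          /* For the small, eagerly buffered message, MPI_Wait may
             return before rank 1 has received it (question 5). For the
             large message, most implementations make MPI_Wait block
             until the receive is matched and the send buffer drained
             (question 7). */
          MPI_Wait(&req[0], MPI_STATUS_IGNORE);
          MPI_Wait(&req[1], MPI_STATUS_IGNORE);
      } else if (rank == 1) {
          MPI_Recv(small, 10, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          MPI_Recv(big, BIG, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
      }

      free(big);
      MPI_Finalize();
      return 0;
  }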

  9. Self Test
  Question 8: Assume the only communicator used in this problem is MPI_COMM_WORLD. After calling MPI_INIT, process 1 immediately sends two messages to process 0. The first message sent has tag 100, and the second message sent has tag 200. After calling MPI_INIT and verifying there are at least 2 processes in MPI_COMM_WORLD, process 0 calls MPI_RECV with the source argument set to 1 and the tag argument set to MPI_ANY_TAG. Choose the best answer.
  A. The tag of the first message received by process 0 is 100.
  B. The tag of the first message received by process 0 is 200.
  C. The tag of the first message received by process 0 is either 100 or 200.
  D. Based on the information given, one cannot safely assume any of the above.

  10. Self Test
  Question 9: Assume the only communicator used in this problem is MPI_COMM_WORLD. After calling MPI_INIT, process 1 immediately sends two messages to process 0. The first message sent has tag 100, and the second message sent has tag 200. After calling MPI_INIT and verifying there are at least 2 processes in MPI_COMM_WORLD, process 0 calls MPI_RECV with the source argument set to 1 and the tag argument set to 200. Choose the best answer.
  A. Process 0 is deadlocked, since it attempted to receive the second message before receiving the first.
  B. Process 0 receives the second message sent by process 1, even though the first message has not yet been received.
  C. None of the above.
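
An added sketch of the ordering and tag-matching rules behind the last two questions: messages from one sender to one receiver on the same communicator are non-overtaking, and a receive matches only messages whose tag satisfies its tag argument. Requires at least 2 processes.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[])
  {
      int rank, first = 1, second = 2, buf;
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      if (rank == 1) {
          MPI_Send(&first, 1, MPI_INT, 0, 100, MPI_COMM_WORLD);
          MPI_Send(&second, 1, MPI_INT, 0, 200, MPI_COMM_WORLD);
      } else if (rank == 0) {
          /* MPI_ANY_TAG matches the first message sent, so the tag
             seen here is 100 (question 8). */
          MPI_Recv(&buf, 1, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD,
                   &status);
          printf("first tag received: %d\n", status.MPI_TAG);

          /* A receive for a specific tag matches only that tag. Had
             process 0 asked for tag 200 first, it would have received
             the second message while the tag-100 message stayed
             pending; there is no deadlock (question 9). */
          MPI_Recv(&buf, 1, MPI_INT, 1, 200, MPI_COMM_WORLD, &status);
      }

      MPI_Finalize();
      return 0;
  }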

  11. Answers
  Question 1: A
  Question 2: B
  Question 3: C
  Question 4: A
  Question 5: B
  Question 6: A
  Question 7: B
  Question 8: A
  Question 9: B

  12. Course Problem

  13. Course Problem
  Description: The initial problem implements a parallel search of an extremely large (several thousand elements) integer array. The program finds all occurrences of a certain integer, called the target, and writes all the array indices where the target was found to an output file. In addition, the program reads both the target value and all the array elements from an input file.
  Exercise: Go ahead and write the real parallel code for the search problem! Using the pseudo-code from the previous chapter as a guide, fill in all the sends and receives with calls to the actual MPI send and receive routines. For this task, use only the blocking routines. If you have access to a parallel computer with the MPI library installed, run your parallel code using 4 processors. See if you get the same results as those obtained with the serial version of Chapter 2. Of course, you should. (A hedged skeleton of one possible solution appears after the Solution comment below.)

  14. Solution
  The results obtained from running this code are in the file "found.data", which contains the following:
  P 1, 62
  P 2, 183
  P 3, 271
  P 3, 291
  P 3, 296
  Notice that in the parallel version the master writes to the file not only each target location but also the rank of the processor that found it. If you wish to confirm that these results are correct, you can run the parallel code using the input file "b.data" from Chapter 2.

  15. Solution
  Comment: The master waits in a DO WHILE loop to receive target locations from any slave sending a message with any tag. A message from any slave with a tag of 52 is the signal that the slave has finished its part of the search. The master uses the end_cnt variable to count how many slaves have sent the end tag (52); when end_cnt equals 3, the master exits the DO WHILE loop and the program ends.
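
Below is a minimal C skeleton of this structure (a hedged sketch, not the course's actual code), assuming 4 processes: master rank 0 plus 3 slaves, as in the exercise. The tag 52 and the end_cnt logic come from the comment above; the array size, the FOUND_TAG value, and the variable names are illustrative stand-ins, and the file I/O on "b.data" and "found.data" is replaced by dummy data and printed output.

  #include <mpi.h>
  #include <stdio.h>

  #define N 300         /* stand-in size; the real array has thousands of elements */
  #define CHUNK (N / 3) /* each of the 3 slaves searches one chunk */
  #define FOUND_TAG 50  /* illustrative tag for "target found at this index" */
  #define END_TAG 52    /* end-of-search tag from the course solution */

  int main(int argc, char *argv[])
  {
      int rank, size, target = 3, a[N], sub[CHUNK];
      MPI_Status status;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      if (size != 4)
          MPI_Abort(MPI_COMM_WORLD, 1);  /* skeleton assumes 1 master + 3 slaves */

      if (rank == 0) {
          /* The real code reads the target and a[] from the input file
             here; dummy data stands in for it. */
          for (int i = 0; i < N; i++)
              a[i] = i % 7;

          /* Distribute the target and one chunk to each slave, using
             only blocking calls as the exercise requires. */
          for (int s = 1; s <= 3; s++) {
              MPI_Send(&target, 1, MPI_INT, s, 0, MPI_COMM_WORLD);
              MPI_Send(&a[(s - 1) * CHUNK], CHUNK, MPI_INT, s, 0,
                       MPI_COMM_WORLD);
          }

          /* The DO WHILE loop described above: receive from any slave
             with any tag; tag 52 means that slave has finished. */
          int end_cnt = 0, loc;
          while (end_cnt < 3) {
              MPI_Recv(&loc, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
              if (status.MPI_TAG == END_TAG)
                  end_cnt++;  /* one more slave done */
              else
                  printf("P %d, %d\n", status.MPI_SOURCE, loc);
          }
      } else {
          /* Slave: receive the target and a chunk, report each match. */
          MPI_Recv(&target, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
          MPI_Recv(sub, CHUNK, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
          for (int i = 0; i < CHUNK; i++) {
              if (sub[i] == target) {
                  int loc = (rank - 1) * CHUNK + i + 1;  /* 1-based global index */
                  MPI_Send(&loc, 1, MPI_INT, 0, FOUND_TAG, MPI_COMM_WORLD);
              }
          }
          int done = 0;
          MPI_Send(&done, 1, MPI_INT, 0, END_TAG, MPI_COMM_WORLD);
      }

      MPI_Finalize();
      return 0;
  }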
