
Parallel Programming in MPI part 2



  1. Parallel Programming in MPI part 2

  2. Today's Topic
    • Non-Blocking Communication: execute other instructions while waiting for the completion of a communication
    • Implementation of collective communications
    • Measuring execution time of MPI programs
    • Deadlock

  3. Today's Topic
    • Non-Blocking Communication: execute other instructions while waiting for the completion of a communication
    • Implementation of collective communications
    • Measuring execution time of MPI programs
    • Deadlock

  4. Non-blocking communication functions
    • Non-blocking = do not wait for the completion of an instruction; proceed to the next instruction.
    • Example) MPI_Irecv & MPI_Wait
    [Figure: Blocking vs. Non-Blocking. MPI_Recv waits for the arrival of the data before executing the next instructions; MPI_Irecv proceeds to the next instructions without waiting for the data, and a later MPI_Wait waits for the data.]

  5. MPI_Irecv
    Usage: int MPI_Irecv(void *b, int c, MPI_Datatype d, int src, int t, MPI_Comm comm, MPI_Request *r);
    • Non-blocking receive
    • Parameters: start address for storing received data, number of elements, data type, rank of the source, tag (= 0, in most cases), communicator (= MPI_COMM_WORLD, in most cases), request
    • request: communication request, used for waiting for the completion of this communication
    • Example)
        MPI_Request req;
        ...
        MPI_Irecv(a, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        ...
        MPI_Wait(&req, &status);

  6. MPI_Isend
    Usage: int MPI_Isend(void *b, int c, MPI_Datatype d, int dest, int t, MPI_Comm comm, MPI_Request *r);
    • Non-blocking send
    • Parameters: start address for sending data, number of elements, data type, rank of the destination, tag (= 0, in most cases), communicator (= MPI_COMM_WORLD, in most cases), request
    • Example)
        MPI_Request req;
        ...
        MPI_Isend(a, 100, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        ...
        MPI_Wait(&req, &status);
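  A minimal sketch (not part of the original slides) of how MPI_Isend, MPI_Irecv and MPI_Wait fit together: two processes exchange arrays and may do other work while the messages are in flight. The tag 0, the array size N and the assumption of exactly two processes are choices made for this example.

    #include <stdio.h>
    #include "mpi.h"
    #define N 100

    int main(int argc, char *argv[])
    {
        int a[N], b[N], i, myid, other;
        MPI_Request sreq, rreq;
        MPI_Status st;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        other = 1 - myid;                      /* assumes exactly 2 processes */
        for (i = 0; i < N; i++) a[i] = myid;   /* data to be sent */

        MPI_Irecv(b, N, MPI_INT, other, 0, MPI_COMM_WORLD, &rreq);
        MPI_Isend(a, N, MPI_INT, other, 0, MPI_COMM_WORLD, &sreq);

        /* other computation can be done here while the messages are in flight,
           but a[] must not be modified and b[] must not be read yet */

        MPI_Wait(&sreq, &st);   /* after this, a[] may be modified again   */
        MPI_Wait(&rreq, &st);   /* after this, b[] holds the received data */

        if (myid == 0) printf("b[0] = %d\n", b[0]);
        MPI_Finalize();
        return 0;
    }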

  7. Non-Blocking Send?
    • Blocking send (MPI_Send): waits for the data to be copied somewhere else, i.e. until the data has been sent out to the network or copied to a temporary buffer.
    • Non-blocking send (MPI_Isend): does not wait.

  8. Notice: data is undefined during non-blocking communication
    • MPI_Irecv: the value of the variable specified for storing the received data is not fixed until MPI_Wait.
    [Figure: A initially holds 10 and MPI_Irecv into A is posted; data with the value 50 arrives at some point. Reading A before MPI_Wait can yield either 10 or 50; after MPI_Wait the value of A is 50.]

  9. Notice: data is undefined during non-blocking communication
    • MPI_Isend: if the variable holding the data to be sent is modified before MPI_Wait, the value that is actually sent is unpredictable.
    [Figure: A holds 10 and MPI_Isend of A is posted. Assigning A = 50 before MPI_Wait causes incorrect communication: either 10 or 50 may be sent. After MPI_Wait, A can be modified (e.g. A = 100) without any problem.]

  10. MPI_Wait
    Usage: int MPI_Wait(MPI_Request *req, MPI_Status *stat);
    • Waits for the completion of a non-blocking communication (MPI_Isend or MPI_Irecv).
    • After it returns, the send data can be modified and the received data can be referred to.
    • Parameters: request, status
    • status: the status of the received data is stored here at the completion of MPI_Irecv.
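  A minimal sketch (not part of the original slides) of one use of the status filled in by MPI_Wait: the standard call MPI_Get_count asks how many elements actually arrived. The message length 30, the buffer size 100 and the assumption of two processes are choices made for this example.

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int a[100], n, i, myid;
        MPI_Request req;
        MPI_Status st;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        if (myid == 0) {
            for (i = 0; i < 30; i++) a[i] = i;
            MPI_Send(a, 30, MPI_INT, 1, 0, MPI_COMM_WORLD);          /* send only 30 elements      */
        } else if (myid == 1) {
            MPI_Irecv(a, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);  /* room for up to 100         */
            MPI_Wait(&req, &st);                                     /* a[] is valid from here     */
            MPI_Get_count(&st, MPI_INT, &n);                         /* how many actually arrived  */
            printf("received %d integers from rank %d\n", n, st.MPI_SOURCE);
        }
        MPI_Finalize();
        return 0;
    }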

  11. MPI_Waitall
    Usage: int MPI_Waitall(int c, MPI_Request *requests, MPI_Status *statuses);
    • Waits for the completion of the specified number of non-blocking communications.
    • Parameters: count, requests, statuses
    • count: the number of non-blocking communications
    • requests, statuses: arrays of MPI_Request and MPI_Status with at least 'count' elements.
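  A minimal sketch (not part of the original slides) of MPI_Waitall: rank 0 posts one MPI_Irecv per other process and waits for all of them at once. The buffer layout and tag 0 are choices made for this example.

    #include <stdio.h>
    #include <stdlib.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int i, myid, procs, *buf;
        MPI_Request *reqs;
        MPI_Status *stats;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &procs);

        if (myid == 0) {
            buf   = (int *)malloc(sizeof(int) * procs);
            reqs  = (MPI_Request *)malloc(sizeof(MPI_Request) * (procs - 1));
            stats = (MPI_Status *)malloc(sizeof(MPI_Status) * (procs - 1));
            for (i = 1; i < procs; i++)
                MPI_Irecv(&buf[i], 1, MPI_INT, i, 0, MPI_COMM_WORLD, &reqs[i - 1]);
            MPI_Waitall(procs - 1, reqs, stats);    /* wait for all receives at once */
            for (i = 1; i < procs; i++)
                printf("from rank %d: %d\n", i, buf[i]);
            free(buf); free(reqs); free(stats);
        } else {
            MPI_Send(&myid, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }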

  12. Today's Topic
    • Non-Blocking Communication: execute other instructions while waiting for the completion of a communication
    • Implementation of collective communications
    • Measuring execution time of MPI programs
    • Deadlock

  13. Inside the functions of collective communications
    • Usually, collective communication functions are implemented with point-to-point communications such as MPI_Send, MPI_Recv, MPI_Isend and MPI_Irecv.

  14. Inside of MPI_Bcast
    • One of the simplest implementations:

    int MPI_Bcast(char *a, int c, MPI_Datatype d, int root, MPI_Comm comm)
    {
        int i, myid, procs;
        MPI_Status st;

        MPI_Comm_rank(comm, &myid);
        MPI_Comm_size(comm, &procs);    /* number of processes */
        if (myid == root){
            /* the root sends to every other process, one by one */
            for (i = 0; i < procs; i++)
                if (i != root) MPI_Send(a, c, d, i, 0, comm);
        } else {
            MPI_Recv(a, c, d, root, 0, comm, &st);
        }
        return 0;
    }

  15. Another implementation: with MPI_Isend

    int MPI_Bcast(char *a, int c, MPI_Datatype d, int root, MPI_Comm comm)
    {
        int i, myid, procs, cntr;
        MPI_Status st, *stats;
        MPI_Request *reqs;

        MPI_Comm_rank(comm, &myid);
        MPI_Comm_size(comm, &procs);
        if (myid == root){
            stats = (MPI_Status *)malloc(sizeof(MPI_Status) * procs);
            reqs  = (MPI_Request *)malloc(sizeof(MPI_Request) * procs);
            cntr = 0;
            /* start all sends at once, then wait for all of them */
            for (i = 0; i < procs; i++)
                if (i != root)
                    MPI_Isend(a, c, d, i, 0, comm, &(reqs[cntr++]));
            MPI_Waitall(procs - 1, reqs, stats);
            free(stats);
            free(reqs);
        } else {
            MPI_Recv(a, c, d, root, 0, comm, &st);
        }
        return 0;
    }

  16. Another implementation: Binomial Tree

    int MPI_Bcast(char *a, int c, MPI_Datatype d, int root, MPI_Comm comm)
    {
        int myid, procs;
        MPI_Status st;
        int mask, relative_rank, src, dst;

        MPI_Comm_rank(comm, &myid);
        MPI_Comm_size(comm, &procs);
        relative_rank = myid - root;
        if (relative_rank < 0) relative_rank += procs;

        /* receive phase: find the bit that tells this rank whom to receive from */
        mask = 1;
        while (mask < procs){
            if (relative_rank & mask){
                src = myid - mask;
                if (src < 0) src += procs;
                MPI_Recv(a, c, d, src, 0, comm, &st);
                break;
            }
            mask <<= 1;
        }

        /* send phase: forward the data to the children, halving the mask each step */
        mask >>= 1;
        while (mask > 0){
            if (relative_rank + mask < procs){
                dst = myid + mask;
                if (dst >= procs) dst -= procs;
                MPI_Send(a, c, d, dst, 0, comm);
            }
            mask >>= 1;
        }
        return 0;
    }

  17. Flow of Binomial Tree
    • Use 'mask' to determine when and how to Send/Recv.
    [Figure: example with 8 processes, root = 0. Each rank grows mask (1, 2, 4, ...) until it knows whom to receive from, then shrinks mask to send:
      mask = 4: Rank 0 sends to Rank 4 (Rank 4 receives from 0).
      mask = 2: Rank 0 sends to Rank 2, Rank 4 sends to Rank 6 (Ranks 2 and 6 receive).
      mask = 1: Rank 0 sends to Rank 1, Rank 2 to Rank 3, Rank 4 to Rank 5, Rank 6 to Rank 7.
    All 8 ranks have the data after log2(8) = 3 steps.]

  18. Today's Topic
    • Non-Blocking Communication: execute other instructions while waiting for the completion of a communication
    • Implementation of collective communications
    • Measuring execution time of MPI programs
    • Deadlock

  19. Measuring the execution time of MPI programs
    MPI_Wtime: returns the current time in seconds as a double.
    Example)
        double t1, t2;
        ...
        t1 = MPI_Wtime();
        /* the part to be measured */
        t2 = MPI_Wtime();
        printf("Elapsed time: %e sec.\n", t2 - t1);

  20. Problem of measuring time in parallel programs
    [Figure: ranks 0, 1 and 2 each call t1 = MPI_Wtime() at different points around their Read, Send and Receive operations, so the measured intervals differ from process to process.]
    • Each process measures a different time. Which one is the time we want?

  21. A solution: use the collective communication MPI_Barrier
    [Figure: every rank calls MPI_Barrier before t1 = MPI_Wtime(), so the measured interval starts at the same time on all processes.]
    • Synchronize the processes with MPI_Barrier before each measurement.
    • Suitable for measuring the total execution time.
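  A minimal sketch (not part of the original slides) that combines MPI_Barrier, MPI_Wtime and MPI_Reduce with MPI_MAX so that the slowest process defines the total elapsed time. The dummy loop only stands in for the region to be measured.

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int i, myid, procs;
        double t1, t2, t, t_max, s = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &procs);

        MPI_Barrier(MPI_COMM_WORLD);            /* start the measurement together     */
        t1 = MPI_Wtime();

        for (i = 0; i < 10000000; i++) s += i;  /* dummy work to be measured          */

        MPI_Barrier(MPI_COMM_WORLD);            /* wait until every rank has finished */
        t2 = MPI_Wtime();
        t = t2 - t1;

        /* the slowest process determines the total elapsed time */
        MPI_Reduce(&t, &t_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (myid == 0)
            printf("Total elapsed: %e sec. (s = %e)\n", t_max, s);

        MPI_Finalize();
        return 0;
    }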

  22. Detailed analysis
    • Average: MPI_Reduce can be used to compute the average:
        double t1, t2, t, total;
        t1 = MPI_Wtime();
        ...
        t2 = MPI_Wtime();
        t = t2 - t1;
        MPI_Reduce(&t, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (myrank == 0)
            printf("Ave. elapsed: %e sec.\n", total/procs);
    • MAX and MIN: use MPI_Gather to gather all of the results to Rank 0, and let Rank 0 find the MAX and MIN (a sketch follows below).
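  A minimal sketch (not part of the original slides) of the MAX/MIN part. It continues the fragment above, so myrank, procs and the measured time t are assumed to be defined as in the slide's example, and <stdlib.h> is needed for malloc.

        double *times;
        int i;

        times = (double *)malloc(sizeof(double) * procs);
        /* gather every rank's measured time into times[] on rank 0 */
        MPI_Gather(&t, 1, MPI_DOUBLE, times, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (myrank == 0) {
            double tmax = times[0], tmin = times[0];
            for (i = 1; i < procs; i++) {
                if (times[i] > tmax) tmax = times[i];
                if (times[i] < tmin) tmin = times[i];
            }
            printf("Max: %e sec.  Min: %e sec.\n", tmax, tmin);
        }
        free(times);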

  23. Relationships among Max, Ave and Min
    • Can be used for checking the load balance (the spread of the amount of work) among processes.
    • Note that the measured time includes both computation time and communication time.

  24. Measuring time for communications

    double t1, t2, t3, t4, comm = 0;

    t3 = MPI_Wtime();
    for (i = 0; i < N; i++){
        /* computation */
        t1 = MPI_Wtime();
        /* communication */
        t2 = MPI_Wtime();
        comm += t2 - t1;
        /* computation */
        t1 = MPI_Wtime();
        /* communication */
        t2 = MPI_Wtime();
        comm += t2 - t1;
    }
    t4 = MPI_Wtime();
    /* t4 - t3 is the total time; comm is the accumulated communication time */

  25. Analyzing computation time
    • Computation time = total time - communication time, or just measure the computation time directly.
    • The spread of computation time across processes shows the imbalance of the amount of computation.
    • Note: communication time is difficult to analyze simply, because it includes waiting time caused by load imbalance. ==> Balance the computation first.

  26. Today's Topic
    • Non-Blocking Communication: execute other instructions while waiting for the completion of a communication
    • Implementation of collective communications
    • Measuring execution time of MPI programs
    • Deadlock

  27. Deadlock
    A state in which a program cannot proceed any further for some reason.
    Places where you need to be careful about deadlocks in MPI programs:
    1. MPI_Recv, MPI_Wait, MPI_Waitall
    2. Collective communications: a program cannot proceed until all processes call the same collective communication function.
    Wrong case (both ranks block in MPI_Recv before either one sends):
        if (myid == 0){ MPI_Recv from rank 1; MPI_Send to rank 1; }
        if (myid == 1){ MPI_Recv from rank 0; MPI_Send to rank 0; }
    One solution: use MPI_Irecv (a concrete sketch follows below):
        if (myid == 0){ MPI_Irecv from rank 1; MPI_Send to rank 1; MPI_Wait; }
        if (myid == 1){ MPI_Irecv from rank 0; MPI_Send to rank 0; MPI_Wait; }
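  A minimal sketch (not part of the original slides) of the wrong case and the fix in real code; the array size N and the assumption of exactly two processes are choices made for this example.

    #include <stdio.h>
    #include "mpi.h"
    #define N 1000

    int main(int argc, char *argv[])
    {
        int a[N], b[N], i, myid, other;
        MPI_Request req;
        MPI_Status st;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        other = 1 - myid;                         /* assumes exactly 2 processes */
        for (i = 0; i < N; i++) a[i] = myid;

        /* Wrong case (would deadlock): both ranks block in MPI_Recv first.
         *   MPI_Recv(b, N, MPI_INT, other, 0, MPI_COMM_WORLD, &st);
         *   MPI_Send(a, N, MPI_INT, other, 0, MPI_COMM_WORLD);
         */

        /* One solution: post the receive with MPI_Irecv, then send, then wait */
        MPI_Irecv(b, N, MPI_INT, other, 0, MPI_COMM_WORLD, &req);
        MPI_Send(a, N, MPI_INT, other, 0, MPI_COMM_WORLD);
        MPI_Wait(&req, &st);

        if (myid == 0) printf("received b[0] = %d\n", b[0]);
        MPI_Finalize();
        return 0;
    }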

  28. Summary
    • Parallel programs require distribution of computation, distribution of data, and communication.
    • Parallelization does not always speed up a program.
    • Some programs cannot be parallelized.
    • Be careful about deadlocks.

  29. Report) Make a Reduce function by yourself
    • Fill in the body of the 'my_reduce' function in the program shown on the next slide to complete the program.
    • my_reduce: a simplified version of MPI_Reduce.
    • It only computes the total sum of integers. The root rank is always 0. The communicator is always MPI_COMM_WORLD.
    • Any algorithm is OK.

  30.
    #include <stdio.h>
    #include <stdlib.h>
    #include "mpi.h"
    #define N 20

    int my_reduce(int *a, int *b, int c)
    {
        /* complete here by yourself */
        return 0;
    }

    int main(int argc, char *argv[])
    {
        int i, myid, procs;
        int a[N], b[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);
        MPI_Comm_size(MPI_COMM_WORLD, &procs);

        for (i = 0; i < N; i++){
            a[i] = i;
            b[i] = 0;
        }

        my_reduce(a, b, N);

        if (myid == 0)
            for (i = 0; i < N; i++)
                printf("b[%d] = %d , correct answer = %d\n", i, b[i], i*procs);

        MPI_Finalize();
        return 0;
    }
