
NCKU HPC Education & Training





Presentation Transcript


  1. NCKU HPC Education & Training 2008.03.27 Jih-Ching Chang bluesun@mail.ncku.edu.tw

  2. Outline • HPC Introduction • Easy User Guide of NCKU HPC • MPI

  3. Shared Memory vs. Distributed Memory • Shared memory: OpenMP • Distributed memory: MPI

  4. Infrastructure and Architecture

  5. Hardware (interconnects: IB / 1GE / FC / TANET) • Computing nodes (Sun X2200 M2 × 128) • File server 1 (Sun X4500 × 2: 48 TB) • Front-end servers (Sun X4200 × 2) • Voltaire 9288 InfiniBand switch • Management servers (Sun X4200 × 2) • File server 2, user data (Sun ST6140: 8 TB) • NCKU campus Ethernet core switch • File server 3, MDS server for the Lustre parallel file system (Sun ST6140: 8 TB)

  6. Software stack • Sun Studio • PGI Compilers • PGI Debugger • PGPROF profiler

  7. Job scheduling workflow • 1) Submit (qsub) • 2) Notify Qmaster • 3) Job placement (Schedd) • 4) Dispatch (Execd) • 5) Load report • 6) Control • 7) Inform when done • 8) Record (Accounting)

  8. System Components

  9. Easy User Guide of NCKU HPC • Login • Basic instructions • How to submit job

  10. Test Account • Accounts test01~test70 • Password: same as the account name

  11. Login • Linux—SSH • MS Windows—putty, pietty http://www.csie.ntu.edu.tw/~piaip/pietty/stable/pietty0327.exe

  12. File Management • WinSCP http://winscp.net/eng/docs/lang:cht

  13. Environment variable setup (re-login required after the change) • Sample file: /ap/example_bashrc • First SSH to a computing node (n121~n128): $ssh n121 • Copy it into the user's home directory as .bashrc: $cp /ap/example_bashrc ~user/.bashrc • Download the example programs for this course: $wget http://140.116.206.34/qsub.tar • Extract: $tar -xvf qsub.tar

  14. How to submit job--Serial job

  15. How to submit job--Parallel job
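Slides 14 and 15 were screenshots that did not survive extraction. As a hedged sketch (not the original script), a Grid Engine parallel job script for this cluster might look like the following; the script name parallel.sh and the qsub/mpirun usage come from later slides, but the parallel-environment name "mpi" is an assumption about this cluster's configuration.

```shell
#!/bin/bash
# parallel.sh -- hedged sketch of a Grid Engine parallel job script.
# The "#$" lines are Grid Engine directives read by qsub.
#$ -N mpihello          # job name
#$ -cwd                 # run in the submission directory
#$ -o mpihello.out      # stdout file
#$ -e mpihello.err      # stderr file
#$ -pe mpi 4            # request 4 slots in the "mpi" parallel environment (assumed name)
# $NSLOTS and the machine file are filled in by Grid Engine at run time
mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./program.x
```

A serial script (serialjob.sh on slide 17) would look the same without the `-pe` line, running `./Hello.x` directly.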

  16. Compiler • GNU: gcc, g++, g77, f77 • Intel: icc, ifc • MPI: mpicc, mpicxx, mpif77, mpif90

  17. Instruction • Compile: $g++ Hello.c -o Hello.x • Execute: $./Hello.x >> output • Submit job: $qsub serialjob.sh $qstat

  18. MPI Introduction • The Message Passing Interface Forum (~60 members, work begun 1992/04) defines the standard • Available standards: 1994/05/05 MPI 1.0, 1995/06/12 MPI 1.1, 1997/07/18 MPI 1.2, 1997/07/18 MPI 2.0

  19. MPI Introduction(2) • Free Software MPICH http://www-unix.mcs.anl.gov/mpi/mpich LAM/MPI http://www.lam-mpi.org

  20. MPI Introduction (3) • Parallel computing: split the work into n equal parts and let n CPUs cooperate to complete the whole computation, each CPU handling one part • Partitioning methods: * Functional decomposition: used in special cases * Data decomposition: used in the general case • Download the example programs: $wget http://140.116.206.34/mpi.zip $unzip mpi.zip

  21. Program Structure

  22. MPI Basic • Required header file: #include <mpi.h> or #include "mpi.h" • Initializing MPI • Must be the first routine called, and called only once: int MPI_Init(int *argc, char ***argv) • Communicator Size • How many processes are contained within a communicator? MPI_Comm_size(MPI_Comm comm, int *size)

  23. MPI Basic(2) • Process Rank • Process ID number within the communicator • Starts with zero and goes to (n-1) where n is the number of processes requested • Used to identify the source and destination of messages MPI_Comm_rank(MPI_Comm comm, int *rank) • Exiting MPI • Must be called last by “all” processes MPI_Finalize()

  24. Hello.c

  25. Instruction • Compile: $mpicxx program.cpp -o program.x • Execute: $mpirun -np 4 -machinefile HOST program.x • Queue: $qsub parallel.sh $qstat

  26. Point to Point Communication • MPI_Send • buf: initial address of send buffer (choice) • count: number of elements in send buffer (nonnegative integer) • datatype: datatype of each send buffer element (handle) • dest: rank of destination (integer) • tag: message tag (integer) • comm: communicator (handle)

  27. Point to Point Communication (2) • MPI_Recv • buf: initial address of receive buffer (choice; output parameter) • count: number of elements in receive buffer (nonnegative integer) • datatype: datatype of each receive buffer element (handle) • source: rank of source (integer) • tag: message tag (integer) • comm: communicator (handle) • status: status object (Status)

  28. Common MPI basic data types for C

  29. Collective Communication • MPI_Scatter • sendbuf: address of send buffer (choice, significant only at root) • sendcnt: number of elements sent to each process (integer, significant only at root) • sendtype: data type of send buffer elements (significant only at root) • recvbuf: address of receive buffer (choice) • recvcount: number of elements in receive buffer (integer) • recvtype: data type of receive buffer elements (handle) • root: rank of sending process (integer) • comm: communicator (handle)

  30. Collective Communication (2) • MPI_Gather • sendbuf: address of send buffer (choice) • sendcnt: number of elements sent to each process (integer) • sendtype: data type of send buffer elements (handle) • recvbuf: address of receive buffer • recvcount: number of elements in receive buffer (integer, significant only at root) • recvtype: data type of receive buffer elements (handle, significant only at root) • root: rank of receiving process (integer) • comm: communicator (handle)

  31. Collective Communication (3)

  32. Collective Communication (4) • MPI_Bcast • buf: initial address of send buffer (choice) • count: number of elements in send buffer (nonnegative integer) • datatype: datatype of each send buffer element (handle) • root: rank of broadcast root (integer) • comm: communicator (handle)

  33. MPI_Bcast

  34. Collective Communication (5) • MPI_Reduce • sendbuf: address of send buffer (choice) • recvbuf: address of receive buffer (choice, significant only at root) • count: number of elements in send buffer (integer) • datatype: datatype of each send buffer element (handle) • op: reduce operation (handle) • root: rank of root process (integer) • comm: communicator (handle)

  35. MPI_Reduce

  36. MPI_Allreduce

  37. MPI_Reduce Function

  38. Pi.c

  39. Pi.c (2)

  40. Pi.c (3)

  41. Pi.c (4)

  42. Advanced Exercise—Matrix Operation • T2SEQ—sequential • T2CP—SPMD (Single Program Multiple Data) • T2DCP—both the computation and the data are partitioned • T3SEQ—with boundary data exchange • T3DP • T3DCP

  43. T2SEQ

  44. T2CP

  45. T2CP (1)

  46. T2CP (2)

  47. T2CP (3)

  48. T2CP (4)

  49. T2DCP (1)

  50. T2DCP (2)
