
MPI Collective Communication and Comm Split in Psmile

Understand the use of MPI collective calls and the importance of calling MPI_comm_split in Psmile. Learn about MPI_Init, MPI_Comm_Spawn, MPI_Comm_Group, and point-to-point synchronous and asynchronous communications.


Presentation Transcript


  1. Collective calls of MPI have to be made by each process included in the communicator passed to the MPI routine. So when the driver (or the PSMILe) makes a collective call within the global communicator, a matching call has to be placed in the PSMILe (or the driver); otherwise the processes that do call the collective routine will wait forever (see mpi_allreduce and mpi_bcast in prismdrv_init_appl and Psmile_init_mpi1).
  MPI_Comm_split(comm_old (in), color, key, comm_new (out), ierror): all processes with the same color end up in the same comm_new, ranked by key (setting key to zero for all processes of a given color means that one does not care about the rank order of the processes in the new communicator).
  MPI_Comm_split(comm_old, 1, 0, comm_new, ierror): used to create the communicator comm_psmile, which applies only to the applications.
  If color = MPI_UNDEFINED for one of the processes, that process is excluded from the new communicator.
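
  As a concrete illustration of the split described above, here is a minimal, self-contained sketch (not the actual PSMILe code): the coloring rule mod(rank, 2) is an arbitrary assumption for the example, and key = 0 so that the rank order inside the new communicator does not matter.

```fortran
program split_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, color, comm_new, new_rank

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  ! Demonstration-only coloring rule: even global ranks get color 0, odd ranks color 1
  color = mod(rank, 2)

  ! key = 0 for everyone: we do not care about the rank order inside comm_new
  call MPI_Comm_split(MPI_COMM_WORLD, color, 0, comm_new, ierror)
  call MPI_Comm_rank(comm_new, new_rank, ierror)

  print *, 'global rank', rank, '-> rank', new_rank, 'in communicator of color', color

  call MPI_Comm_free(comm_new, ierror)
  call MPI_Finalize(ierror)
end program split_example
```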

  2. Include 'mpif.h'. MPI_Init(ierror): must be called before any other MPI routine, except MPI_Initialized, which is used to find out whether a process has already called MPI_Init. Every MPI program must contain a call to MPI_Init.
  MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes, ierror): from the intra-communicator comm, the root process starts maxprocs copies of the executable named by command (info is often MPI_INFO_NULL), and an inter-communicator connecting the parent group to the spawned group is returned in intercomm.
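
  Below is a hedged sketch of a parent program calling MPI_Init and then MPI_Comm_spawn; the executable name './worker' and the count of 2 spawned copies are made-up values for illustration, not anything defined by PSMILe.

```fortran
program spawn_parent
  implicit none
  include 'mpif.h'
  integer :: ierror, intercomm
  integer :: errcodes(2)   ! one error code per spawned process

  call MPI_Init(ierror)

  ! Spawn 2 copies of a hypothetical worker executable; root process 0 of
  ! MPI_COMM_WORLD does the spawning, and intercomm connects the parent
  ! group to the newly created group of workers.
  call MPI_Comm_spawn('./worker', MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0, &
                      MPI_COMM_WORLD, intercomm, errcodes, ierror)

  call MPI_Finalize(ierror)
end program spawn_parent
```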

  3. To construct a communicator, you can use another communicator or a group of processes. The communicator created by default is MPI_COMM_WORLD; it is created by MPI_Init and destroyed by MPI_Finalize. A communicator is associated with a group, and MPI_Comm_group is used to obtain this group:
  MPI_Comm_group(comm (in), group (out), ierror): returns in group the handle of the group associated with the communicator comm.
  MPI_Group_translate_ranks(group_a, n, ranks_a, group_b, ranks_b, ierror):
  (IN) group_a: first group (handle)
  (IN) n: number of ranks in the ranks_a and ranks_b arrays
  (IN) ranks_a: array of zero or more valid ranks in group_a
  (IN) group_b: second group (handle)
  (OUT) ranks_b: array of corresponding ranks in group_b (MPI_UNDEFINED when no correspondence exists)
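
  A small sketch combining MPI_Comm_split, MPI_Comm_group and MPI_Group_translate_ranks; the even/odd coloring and the choice of translating world rank 0 are illustrative assumptions, and the odd ranks pass MPI_UNDEFINED and therefore receive MPI_COMM_NULL, as noted in item 1.

```fortran
program group_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, color, world_group, sub_comm, sub_group
  integer :: ranks_world(1), ranks_sub(1)

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  ! Group associated with the default communicator
  call MPI_Comm_group(MPI_COMM_WORLD, world_group, ierror)

  ! Sub-communicator containing only the even global ranks;
  ! odd ranks are excluded via color = MPI_UNDEFINED
  color = MPI_UNDEFINED
  if (mod(rank, 2) == 0) color = 1
  call MPI_Comm_split(MPI_COMM_WORLD, color, 0, sub_comm, ierror)

  if (sub_comm /= MPI_COMM_NULL) then
    call MPI_Comm_group(sub_comm, sub_group, ierror)

    ! Translate world rank 0 into its rank within the sub-group
    ! (MPI_UNDEFINED would be returned if it were not a member)
    ranks_world(1) = 0
    call MPI_Group_translate_ranks(world_group, 1, ranks_world, sub_group, ranks_sub, ierror)
    if (rank == 0) print *, 'world rank 0 is rank', ranks_sub(1), 'in the sub-group'

    call MPI_Group_free(sub_group, ierror)
    call MPI_Comm_free(sub_comm, ierror)
  end if

  call MPI_Group_free(world_group, ierror)
  call MPI_Finalize(ierror)
end program group_example
```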

  4. A message in MPI == (data + envelope). Point-to-point synchronous, blocking communications:
  MPI_SEND(buf, count, DTYPE, dest, tag, comm, ierr): the destination process is named by the sender.
  MPI_RECV(buf, count, DTYPE, source, tag, comm, status, ierr): the process the message comes from is named by the receiver.
  Definition of SYNCHRONOUS: said of a communication in which the sender and the receiver operate at the rate of a common clock, set at the start of the communication.
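
  A minimal blocking send/receive sketch; the tag 99, the value 3.14 and the assumption of running on at least two processes are illustrative only.

```fortran
program sendrecv_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, status(MPI_STATUS_SIZE)
  double precision :: val

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
    val = 3.14d0
    ! Blocking send to destination rank 1 with tag 99
    call MPI_Send(val, 1, MPI_DOUBLE_PRECISION, 1, 99, MPI_COMM_WORLD, ierror)
  else if (rank == 1) then
    ! Blocking receive naming the source rank 0 and the matching tag
    call MPI_Recv(val, 1, MPI_DOUBLE_PRECISION, 0, 99, MPI_COMM_WORLD, status, ierror)
    print *, 'rank 1 received', val
  end if

  call MPI_Finalize(ierror)
end program sendrecv_example
```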

  5. A message in MPI == (data + envelope). Point-to-point asynchronous, non-blocking communications (I = immediate):
  MPI_I(R,S,B)SEND(buf, count, DTYPE, dest, tag, comm, request, ierr): the destination process is named by the sender.
  MPI_IRECV(buf, count, DTYPE, source, tag, comm, request, ierr): the process the message comes from is named by the receiver.
  A receive request can be checked for completion by calling MPI_Wait, MPI_Waitany, MPI_Test or MPI_Testany with the request handle returned by these routines.
  Definition of ASYNCHRONOUS: describes a type of data exchange between two machines in which the exchanged data are sent and processed against different time references and at a variable rate.
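
  The same kind of exchange written with the non-blocking (immediate) calls; MPI_Wait is used here to complete both the send and the receive request. Tag and value are again illustrative, and at least two processes are assumed.

```fortran
program isend_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, request, status(MPI_STATUS_SIZE)
  integer :: val

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  val = 0
  if (rank == 0) then
    val = 42
    ! Non-blocking send: returns immediately with a request handle
    call MPI_Isend(val, 1, MPI_INTEGER, 1, 7, MPI_COMM_WORLD, request, ierror)
  else if (rank == 1) then
    ! Non-blocking receive: the buffer must not be read before completion
    call MPI_Irecv(val, 1, MPI_INTEGER, 0, 7, MPI_COMM_WORLD, request, ierror)
  end if

  if (rank <= 1) then
    ! MPI_Wait blocks until the operation behind "request" has completed
    call MPI_Wait(request, status, ierror)
    if (rank == 1) print *, 'rank 1 received', val
  end if

  call MPI_Finalize(ierror)
end program isend_example
```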

  6. MPI_REDUCE(sendbuf, recvbuf, count, datatype, operation, root, comm, ierr): reduces values on all processes to a single value, applying the given reduction operation (e.g. MPI_SUM); the result of the operation on sendbuf is placed in recvbuf on process root.
  MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, operation, comm, ierr): same reduction, but the result in recvbuf is redistributed to all processes.
  MPI_BCAST(buf, count, datatype, root, comm, ierr): broadcasts (sends) a message from the process with rank root to all other processes of the group in the communicator.
  MPI_GATHER(sndbuf, sndcount, sndtype, rcvbuf, rcvcount, rcvtype, root, comm, ierr): gathers the send buffers of all processes of the communicator into the receive buffer of process root.
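
  A sketch exercising MPI_Reduce, MPI_Bcast and MPI_Gather on MPI_COMM_WORLD; the use of MPI_SUM and the local value rank + 1 are arbitrary choices for the example. Note that, as stressed in item 1, every process of the communicator executes each collective call.

```fortran
program collective_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, nprocs, local, total
  integer, allocatable :: gathered(:)

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierror)
  allocate(gathered(nprocs))

  local = rank + 1

  ! Sum "local" over all processes; the result lands in "total" on root 0 only
  call MPI_Reduce(local, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierror)

  ! Broadcast the result from root 0 back to every process
  call MPI_Bcast(total, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierror)

  ! Gather each process's "local" value into an array on root 0
  call MPI_Gather(local, 1, MPI_INTEGER, gathered, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierror)

  if (rank == 0) print *, 'sum =', total, '  gathered =', gathered

  deallocate(gathered)
  call MPI_Finalize(ierror)
end program collective_example
```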

  7. MPI_WAITALL(count, array_of_requests, array_of_statuses, ierr): blocks until all communications associated with the active handles in the list are complete, and returns the status of each of these operations. The request and status arrays must have the same number of valid entries.
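
  A sketch of MPI_Waitall completing two outstanding requests at once (two non-blocking sends from rank 0 matched by two non-blocking receives on rank 1); tags and values are illustrative, and at least two processes are assumed.

```fortran
program waitall_example
  implicit none
  include 'mpif.h'
  integer :: ierror, rank, i
  integer :: requests(2), statuses(MPI_STATUS_SIZE, 2)
  integer :: sendvals(2), recvvals(2)

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
    sendvals = (/ 10, 20 /)
    ! Post two non-blocking sends, one per tag
    do i = 1, 2
      call MPI_Isend(sendvals(i), 1, MPI_INTEGER, 1, i, MPI_COMM_WORLD, requests(i), ierror)
    end do
  else if (rank == 1) then
    ! Post the two matching non-blocking receives
    do i = 1, 2
      call MPI_Irecv(recvvals(i), 1, MPI_INTEGER, 0, i, MPI_COMM_WORLD, requests(i), ierror)
    end do
  end if

  if (rank <= 1) then
    ! Block until both requests are complete; statuses(:, i) describes request i
    call MPI_Waitall(2, requests, statuses, ierror)
    if (rank == 1) print *, 'rank 1 received', recvvals
  end if

  call MPI_Finalize(ierror)
end program waitall_example
```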
