
Process Groups & Communicators



  1. Process Groups & Communicators
  • A communicator is a group of processes that can communicate with one another.
  • Sub-groups of processes, or sub-communicators, can be created.
  • Process groups and sub-communicators cannot be created from scratch.
  • They must be derived from some existing group or communicator, i.e. one needs to start from MPI_COMM_WORLD (or MPI_COMM_SELF).
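The workflow these bullets describe can be made concrete with a short, self-contained sketch; the choice to exclude world rank 0 is purely illustrative, and the variable names are not part of any MPI API.

    /* Minimal sketch: communicator -> group -> sub-group -> sub-communicator */
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        MPI_Group world_group, sub_group;
        MPI_Comm  sub_comm;
        int ranks[1] = {0};                                     /* exclude world rank 0, as an example */

        MPI_Comm_group(MPI_COMM_WORLD, &world_group);           /* communicator -> group   */
        MPI_Group_excl(world_group, 1, ranks, &sub_group);      /* group -> sub-group      */
        MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);  /* sub-group -> sub-comm   */

        if (sub_comm != MPI_COMM_NULL)                          /* world rank 0 is not in the new communicator */
            MPI_Comm_free(&sub_comm);
        MPI_Group_free(&sub_group);
        MPI_Group_free(&world_group);

        MPI_Finalize();
        return 0;
    }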

  2. Sub-communicators
  [Diagram: a communicator (MPI_Comm) has an associated process group (MPI_Group); selecting a process sub-group from that group yields a sub-communicator.]

  3. Communicator → Process Group
  • MPI_Group is a handle representing a process group in C; in Fortran it is an integer.
  • MPI_GROUP_EMPTY: predefined group with no members.
  • MPI_GROUP_NULL: predefined invalid handle.
  • MPI_COMM_GROUP gets the group associated with a communicator.

    int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)

    MPI_COMM_GROUP(COMM, GROUP, IERROR)
        INTEGER COMM, GROUP, IERROR

    ...
    MPI_Group group;
    MPI_Comm_group(MPI_COMM_WORLD, &group);
    ...

  4. Process Groups
  • MPI_Group_size(): number of processes in the group.
  • MPI_Group_rank(): rank of the calling process in the group; returns MPI_UNDEFINED if the caller does not belong to the group.

    int MPI_Group_size(MPI_Group group, int *size)
    int MPI_Group_rank(MPI_Group group, int *rank)

    int ncpus, rank;
    MPI_Group MPI_GROUP_WORLD;
    ...
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_size(MPI_GROUP_WORLD, &ncpus);
    MPI_Group_rank(MPI_GROUP_WORLD, &rank);
    ...
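A complete, runnable version of this fragment might look as follows; note that user code should normally avoid identifiers with the reserved MPI_ prefix, so the sketch uses world_group instead of MPI_GROUP_WORLD.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        MPI_Group world_group;
        int ncpus, grank;

        MPI_Comm_group(MPI_COMM_WORLD, &world_group);
        MPI_Group_size(world_group, &ncpus);
        MPI_Group_rank(world_group, &grank);    /* would be MPI_UNDEFINED if the caller were not a member */

        printf("group size = %d, my rank in the group = %d\n", ncpus, grank);

        MPI_Group_free(&world_group);
        MPI_Finalize();
        return 0;
    }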

  5. Group Constructors/Destructor

    int MPI_Group_incl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
    int MPI_Group_excl(MPI_Group group, int n, int *ranks, MPI_Group *newgroup)
    int MPI_Group_free(MPI_Group *group)

  • MPI_Group_incl: creates a new group newgroup consisting of the n processes of group whose ranks are listed in ranks.
  • MPI_Group_excl: creates a new group newgroup consisting of the processes of group excluding the n processes listed in ranks.

    MPI_Group MPI_GROUP_WORLD, slave;
    int ranks[1];
    ranks[0] = 0;
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &slave);   /* exclude world rank 0 */
    ...
    MPI_Group_free(&slave);
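The slide's code exercises only MPI_Group_excl; a matching sketch for MPI_Group_incl, building a hypothetical one-process "master" group, could look like this (names are illustrative):

    MPI_Group world_group, master;
    int ranks[1] = {0};

    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, 1, ranks, &master);   /* keep only world rank 0 */
    ...                                               /* e.g. pass master to MPI_Comm_create */
    MPI_Group_free(&master);
    MPI_Group_free(&world_group);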

  6. Process Group → Communicator
  • Only communicators can be used in communication routines.
  • MPI_Comm_create(): creates a communicator from a process group.
  • group must be a sub-group of the group associated with comm.
  • Collective operation: all processes of comm must call it.

    int MPI_Comm_create(MPI_Comm comm, MPI_Group group, MPI_Comm *newcomm)

    MPI_Group MPI_GROUP_WORLD, slave;
    MPI_Comm slave_comm;
    int ranks[1];
    ranks[0] = 0;
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &slave);
    MPI_Comm_create(MPI_COMM_WORLD, slave, &slave_comm);
    /* on world rank 0, which is not in slave, slave_comm is MPI_COMM_NULL */
    ...
    MPI_Group_free(&slave);
    if (slave_comm != MPI_COMM_NULL) MPI_Comm_free(&slave_comm);
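At the point marked "..." above, the new communicator can be used like any other communicator; a hedged sketch of one such use, broadcasting a value among the slave processes only, is shown below.

    int value = 42;
    if (slave_comm != MPI_COMM_NULL) {
        /* root 0 here is rank 0 within slave_comm (world rank 1),
           not world rank 0, which was excluded from the group     */
        MPI_Bcast(&value, 1, MPI_INT, 0, slave_comm);
    }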

  7. Communicator Constructors
  • MPI_Comm_dup(): duplicates a communicator.
  • Same group of processes, but a different communication context.
  • Messages sent using the duplicated communicator cannot be received with the original one.
  • Used for writing parallel libraries: library traffic will not interfere with the user's communications.
  • Collective routine; all processes of comm must call it.

    int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
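To illustrate the library-safety point, a hypothetical library initialization routine might duplicate the caller's communicator once and use the private copy for all internal messages (the function name is invented for this sketch):

    /* The library keeps its own communication context, so its internal
       sends and receives can never be matched by the caller's calls.   */
    void my_lib_init(MPI_Comm user_comm, MPI_Comm *lib_comm)
    {
        MPI_Comm_dup(user_comm, lib_comm);   /* same processes, private context */
    }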

  8. Communicator Constructors

    int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)

  • Partitions comm into disjoint sub-communicators, one for each value of color.
  • Processes providing the same color value belong to the same new communicator newcomm.
  • Within newcomm, processes are ranked according to the key values they provide.
  • color must be non-negative.
  • If MPI_UNDEFINED is provided as color, MPI_COMM_NULL is returned in newcomm.

    int rank, color;
    MPI_Comm newcomm;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    color = rank % 3;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);
    ...
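A self-contained version of the split above, in which every process reports its color and its rank inside the new communicator, might look like this:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank, color, newrank;
        MPI_Comm newcomm;

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        color = rank % 3;                        /* three disjoint sub-communicators */
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &newcomm);
        MPI_Comm_rank(newcomm, &newrank);

        printf("world rank %d -> color %d, new rank %d\n", rank, color, newrank);

        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }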

  9. Process Virtual Topology
  • Linear ranking (0, 1, …, N-1) often does not reflect the logical communication pattern of the processes.
  • It is desirable to arrange processes logically to reflect the topological pattern underlying the problem geometry or the numerical algorithm, e.g. 2D or 3D grids.
  • Virtual topology: this logical process arrangement.
  • A virtual topology does not necessarily reflect the machine's physical topology.
  • Virtual topologies are built on communicators.

  10. Virtual Topology
  • Communication patterns can in general be described by a graph: nodes stand for processes, and edges connect processes that communicate with each other.
  • Many applications use process topologies such as rings, 2D or higher-dimensional grids, or tori.
  • MPI provides two topology constructors: Cartesian topology and general graph topology.

  11. Cartesian Topology
  • n-dimensional Cartesian topology: an (m1, m2, m3, …, mn) grid, where mi is the number of processes in the i-th direction.
  • A process can be represented by its coordinates (j1, j2, j3, …, jn), where 0 <= ji <= mi - 1.
  • Constructor: MPI_Cart_create()
  • Translation (see the sketch below):
    process rank → coordinates
    coordinates → process rank
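For concreteness, MPI numbers the processes of a Cartesian topology in row-major order, so in two dimensions the rank-to-coordinate translation performed by the inquiry functions of slide 13 reduces to the following sketch (assuming no rank reordering; the helper names are illustrative, not MPI routines):

    /* Row-major translation for an m1 x m2 grid */
    int rank_of(int j1, int j2, int m2)
    {
        return j1 * m2 + j2;
    }

    void coords_of(int rank, int m2, int *j1, int *j2)
    {
        *j1 = rank / m2;
        *j2 = rank % m2;
    }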

  12. Cartesian Constructor

    int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims, int *periods,
                        int reorder, MPI_Comm *comm_cart)

  • comm_old: existing communicator
  • ndims: number of dimensions of the Cartesian topology
  • dims: vector of length ndims, number of processes in each direction
  • periods: vector of length ndims, true or false, periodic or not in each direction
  • reorder: true, ranks may be reordered; false, no reordering of ranks
  • comm_cart: new communicator with the Cartesian topology

    MPI_Comm comm_new;
    int dims[2], periods[2], ncpus;
    MPI_Comm_size(MPI_COMM_WORLD, &ncpus);
    dims[0] = 2;
    dims[1] = ncpus / 2;              // assumes ncpus is divisible by 2
    periods[0] = periods[1] = 1;      // periodic in both directions
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm_new);
    ...

  13. Topology Inquiry Functions

    int MPI_Cartdim_get(MPI_Comm comm, int *ndims)
    int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords)
    int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank)

  • MPI_Cartdim_get returns the number of dimensions of the Cartesian topology.
  • MPI_Cart_coords returns the coordinates of the process with the given rank.
  • MPI_Cart_rank returns the rank of the process with the coordinates *coords.

    MPI_Comm comm_cart;
    int ndims, rank, *coords;
    ... // create Cartesian topology on comm_cart
    MPI_Comm_rank(comm_cart, &rank);
    MPI_Cartdim_get(comm_cart, &ndims);
    coords = (int *) malloc(ndims * sizeof(int));
    MPI_Cart_coords(comm_cart, rank, ndims, coords);   // coords now holds this process's coordinates
    ...
    for (int i = 0; i < ndims; i++) coords[i] = 0;
    MPI_Cart_rank(comm_cart, coords, &rank);           // rank of the process at the origin
    free(coords);
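Tying slides 12 and 13 together, a complete sketch might create a periodic 2 x (N/2) grid (this assumes an even number of processes) and let every process report and re-translate its own coordinates:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int ncpus, rank, ndims, back;
        int dims[2], periods[2], coords[2];
        MPI_Comm comm_cart;

        MPI_Comm_size(MPI_COMM_WORLD, &ncpus);
        dims[0] = 2;  dims[1] = ncpus / 2;       /* assumes ncpus is divisible by 2 */
        periods[0] = periods[1] = 1;             /* torus: periodic in both directions */

        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &comm_cart);
        if (comm_cart == MPI_COMM_NULL) {        /* only happens if the grid is smaller than ncpus */
            MPI_Finalize();
            return 0;
        }

        MPI_Comm_rank(comm_cart, &rank);
        MPI_Cartdim_get(comm_cart, &ndims);      /* ndims == 2 here */
        MPI_Cart_coords(comm_cart, rank, ndims, coords);
        MPI_Cart_rank(comm_cart, coords, &back); /* back == rank */

        printf("rank %d has coordinates (%d, %d), translated back to rank %d\n",
               rank, coords[0], coords[1], back);

        MPI_Comm_free(&comm_cart);
        MPI_Finalize();
        return 0;
    }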
