
Derived Datatypes





Presentation Transcript


  1. Derived Datatypes Research Computing UNC - Chapel Hill Instructor: Mark Reed Email: markreed@unc.edu

  2. MPI Datatypes MPI Primitive Datatypes: MPI_INT, MPI_FLOAT, MPI_INTEGER, etc. Derived Datatypes can be constructed by four methods: contiguous, vector, indexed, struct

  3. Derived Datatypes Roll your own … create your own types to suit your application, for convenience and efficiency: MyVeryOwnDataType

  4. Derived Datatypes You can define new data structures based upon sequences of the MPI primitive datatypes; these are derived datatypes. Primitive datatypes are contiguous. Derived datatypes allow you to specify noncontiguous data in a convenient manner and to treat it as though it were contiguous. Derived datatypes can be used in all send and receive operations, including collectives.

  5. Type Maps A derived datatype specifies two things: a sequence of primitive datatypes, and a sequence of integer (byte) displacements measured from the beginning of the buffer. Displacements are not required to be positive, distinct, or in increasing order (however, a negative displacement refers to memory before the buffer). The order of items need not coincide with their order in memory, and an item may appear more than once.

  6. Type Map {(primitive datatype 0, displacement 0), (primitive datatype 1, displacement 1), ..., (primitive datatype n-1, displacement n-1)}

  7. Type Signature The sequence of primitive datatypes (i.e., displacements ignored) is the type signature of the datatype. So a type map of {(double,0),(int,8),(char,12)} has a type signature of {double, int, char}.

  8. Extent The extent of a datatype is defined as the span from the first byte to the last byte occupied by entries in this datatype, rounded up to satisfy alignment requirements. Example: Type = {(double,0),(char,8)}, i.e. offsets of 0 and 8 respectively. Now assume that doubles are aligned strictly at addresses that are multiples of 8: extent = 16 (the 9-byte span rounds up to the next multiple of 8).

  9. MPI Derived Datatypes Datatype Interrogators • MPI_Type_extent(MPI_Datatype datatype, MPI_Aint *extent) • datatype - primitive or derived datatype • extent - returns the extent of datatype in bytes • MPI_Type_size(MPI_Datatype datatype, int *size) • datatype - primitive or derived datatype • size - returns the size in bytes of the entries in the type signature of datatype, i.e. the total size of a message with this datatype; thus gaps don't contribute to size. (MPI-2 deprecated MPI_Type_extent in favor of MPI_Type_get_extent.)

  10. Committing Datatypes MPI_Type_commit(MPI_Datatype *datatype) Required for every user-defined datatype before it can be used in communication. Subsequently the datatype can be used in any function call where an MPI_Datatype is specified.

  11. Datatype Constructors Contiguous Vector Indexed Struct

  12. Contiguous MPI_Type_contiguous (int count, MPI_Datatype oldtype, MPI_Datatype *newtype) Contiguous is the simplest constructor. newtype is the datatype obtained by concatenating count copies of oldtype into contiguous locations. Concatenation is defined using extent (oldtype) as the size of the concatenated copies

  13. Vector MPI_Type_vector (int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype) count - number of blocks blocklength - number of elements in each block stride - spacing between start of each block, measured as number of elements

  14. Vector Vector is a constructor that allows replication of a datatype into locations that consist of equally spaced blocks. Each block is obtained by concatenating the same number of copies of the old datatype. The spacing between blocks is a multiple of the extent of the old datatype.

  15. Vector [diagram: count = 3, blocklength = 2, stride = 3 applied to oldtype produces newtype]

  16. Hvector MPI_Type_hvector (int count, int blocklength, MPI_Aint stride, MPI_Datatype oldtype, MPI_Datatype *newtype) Same as vector except stride is measured in bytes rather than as a multiple of the oldtype extent. H is for heterogeneous

  17. Example: Section of a 2D Array [diagram: send the highlighted (gold) blocks of the array]

  18. Example Code

      REAL a(6,5), e(3,3)
      INTEGER oneslice, twoslice, sizeofreal, myrank, ierr
      INTEGER status(MPI_STATUS_SIZE)
c     extract the section a(1:6:2,1:5:2) and store it in e
      call MPI_TYPE_EXTENT(MPI_REAL, sizeofreal, ierr)
c     create datatype for a 1D section
      call MPI_TYPE_VECTOR(3, 1, 2, MPI_REAL, oneslice, ierr)
c     create datatype for a 2D section
c     Note: extent of oneslice = 5 reals, extent of twoslice = 29 reals
      call MPI_TYPE_HVECTOR(3, 1, 12*sizeofreal, oneslice, twoslice, ierr)
      call MPI_TYPE_COMMIT(twoslice, ierr)
c     send and receive on the same process
      call MPI_SENDRECV(a(1,1), 1, twoslice, myrank, 0, e, 9,
     &                  MPI_REAL, myrank, 0, MPI_COMM_WORLD, status, ierr)

  19. Indexed The Indexed constructor allows one to specify a noncontiguous data layout where displacements between successive blocks need not be equal. This allows one to gather arbitrary entries from an array and send them in one message, or receive one message and scatter the received entries into arbitrary locations in an array. An Hindexed version (byte displacements) is available as well.

  20. Struct MPI_TYPE_STRUCT is the most general type constructor. It generalizes MPI_TYPE_HINDEXED in that it allows each block to consist of replications of a different datatype. The intent is to allow descriptions of arrays of structures as a single datatype.

  21. Deallocation MPI_Type_free (MPI_Datatype *datatype) MPI_TYPE_FREE marks the datatype object for deallocation and sets datatype to MPI_DATATYPE_NULL. Any communication that is currently using this datatype will complete normally. Derived datatypes that were defined from the freed datatype are not affected

  22. Pack and Unpack Can usually be avoided by using derived datatypes pack/unpack routines are provided for compatibility with previous libraries, e.g. PVM and Parmacs Provide some functionality that is not otherwise available in MPI. For instance, a message can be received in several parts, where the receive operation done on a later part may depend on the content of a former part

  23. Packing vs Derived Datatypes Use of derived datatypes is generally recommended The use of derived datatypes will often lead to improved performance: data copying can be avoided, and information on data layout can be reused, when the same communication buffer is reused. Packing may result in more efficient code in situations where the sender has to communicate to the receiver information that affects the layout of the receive buffer

  24. MPI_Pack int MPI_Pack(void* inbuf, int incount, MPI_Datatype datatype, void *outbuf, int outcount, int *position, MPI_Comm comm) inbuf - input buffer start incount - number of input data items datatype - datatype of each input data item outbuf - output buffer start outcount - output buffer size, in bytes position - current position in buffer, in bytes comm - communicator for packed message

  25. MPI_Unpack int MPI_Unpack(void* inbuf, int insize, int *position, void *outbuf, int outcount, MPI_Datatype datatype, MPI_Comm comm) inbuf - input buffer start insize - size of input buffer, in bytes position - current position in buffer, in bytes outbuf - output buffer start outcount - number of items to be unpacked datatype - datatype of each output data item comm - communicator for packed message

  26. Example: Mandelbrot Set z_n = z_(n-1) * z_(n-1) + c. For the Mandelbrot set z_0 = 0; thus z_1 = c; z_2 = c*c + c. If |z_n| stays bounded for c (i.e. never exceeds 2), then c is in the set: color it black. Otherwise the color is chosen based on the value of n at which |z_n| diverges (exceeds 2). This is a "naturally" or "embarrassingly" parallel app since the calculations are independent, but there is a problem … what is it?

  27. Mandelbrot Problem? Load balancing! Solution: partition the data set into a large number of squares and designate one processor (the master) to parcel them out. [figure: view of the entire Mandelbrot set] See pmandel.c by Ed Karrels in the mpich distribution.
