
Message Passing Libraries

  1. Message Passing Libraries
  Abdelghani Bellaachia, Computer Science Department, George Washington University, Washington, DC 20052

  2. Objectives
  • Large scientific applications scale to hundreds of processors routinely, and to thousands of processors in rare cases:
  • Climate/ocean modeling
  • Molecular physics (QCD, dynamics, materials, …)
  • Computational fluid dynamics
  • And many more …
  • The goal is to create concurrent and parallel applications.
  • A message-passing library lets the programmer explicitly tell each process what to do, and provides a mechanism to transfer data between processes.

  3. Message Passing Approach
  • Large parallel programs need well-defined mechanisms to coordinate and exchange information.
  • Communication is accomplished by message passing.
  • Message passing allows two processes to:
  • Exchange information
  • Synchronize with each other

  4. Hello World – MP Style
  Process A:
  • Initialize
  • Send(B, "Hello World")
  • Recv(B, String)
  • Print String   (prints "Hi There")
  • Finalize
  Process B:
  • Initialize
  • Recv(A, String)
  • Print String   (prints "Hello World")
  • Send(A, "Hi There")
  • Finalize
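
  As a concrete sketch of this exchange in real code, here is how processes A and B might look using MPI (the library covered starting at slide 23), assuming the job is launched with exactly two processes, rank 0 playing A and rank 1 playing B:

    /* Minimal MPI sketch of the Hello World exchange above. */
    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        char buf[32];
        int rank;

        MPI_Init(&argc, &argv);                    /* Initialize */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                           /* Process A */
            strcpy(buf, "Hello World");
            MPI_Send(buf, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 32, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", buf);                   /* prints "Hi There" */
        } else {                                   /* Process B */
            MPI_Recv(buf, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", buf);                   /* prints "Hello World" */
            strcpy(buf, "Hi There");
            MPI_Send(buf, 32, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();                            /* Finalize */
        return 0;
    }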

  5. PVM
  • Runs on a variety of Unix machines
  • Works over local-area networks, wide-area networks, or a combination of the two
  • The application decides where and when its components are executed.
  • The application determines its own control and dependency structure.
  • Applications can be written in C, Fortran, and Java
  • Components:
  • PVM daemon process (pvmd3)
  • Library interface routines for Fortran and C: libpvm3.a, libfpvm3.a, libgpvm3.a
  • (For comparison: in MPI, PVM-style explicit packing and unpacking of data is generally avoided by defining an MPI datatype.)

  6. PVM daemon (pvmd3)
  • A process that oversees the operation of user processes within a PVM application and coordinates inter-machine PVM communications
  • One daemon runs on each machine configured into your parallel virtual machine
  • Daemons use a "master (local) - remote" control scheme
  • Each daemon maintains a table of configuration and process information for your parallel virtual machine
  • Processes communicate with each other through the daemons:
  • They talk to their local daemon via the library interface routines
  • The local daemon then sends/receives messages to/from remote host daemons

  7. PVM Libraries
  • libpvm3.a: library of C-language interface routines
  • libfpvm3.a: additional library for Fortran codes
  • libgpvm3.a: required for use with dynamic groups (members can be added to or deleted from user-defined groups at any time)

  8. Typical Subroutine Calls
  • Initiate and terminate processes
  • Pack, send, and receive messages
  • Synchronize via barriers
  • Query and dynamically change the configuration of the parallel virtual machine

  9. Application Requirements
  • Network-connected computers
  • PVM daemon: built for each architecture and installed on each machine
  • PVM libraries: built and installed for each architecture
  • Your application-specific files:
  • Program components
  • PVM hostfile (defines which physical machines comprise your parallel virtual machine)
  • Other libraries required by your program

  10. Process Creation
  • In C: numt = pvm_spawn (task, argv, flag, where, ntask, tids)
  • where:
  • task = the character name of the executable file
  • argv = pointer to arguments for the executable
  • flag = integer specifying spawning options; flag = PvmTaskDefault lets PVM choose the processors on which to start the processes
  • where = used to choose processors; not used if flag is set to PvmTaskDefault
  • ntask = number of copies of the executable to start
  • tids = array of integer task IDs of the started processes, returned by the routine
  • numt = number of processes actually started; a value < 0 indicates an error
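
  A minimal sketch of such a spawn call and the check on numt; the executable name hello.worker is taken from the example on slide 19, while the count of four tasks is illustrative:

    #include <stdio.h>
    #include "pvm3.h"

    int main(void)
    {
        int tids[4];
        int numt;

        /* Let PVM pick the processors: flag = PvmTaskDefault, where = "" (ignored). */
        numt = pvm_spawn("hello.worker", NULL, PvmTaskDefault, "", 4, tids);
        if (numt < 4)
            printf("only %d of 4 tasks started\n", numt);

        pvm_exit();
        return 0;
    }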

  11. Process Termination
  • Tells the pvmd that the process is leaving PVM
  • In C: int info = pvm_exit ()
  • where info is the integer status code returned from the routine (a value < 0 indicates an error)

  12. Sending a Message
  • Sending a message takes three steps:
  • Initialize a buffer to use for sending
  • Pack various types of data into that buffer
  • Send the buffer contents to a designated location or locations
  • Initializing the send buffer clears it and prepares it for packing a new message:
  • In C: int bufid = pvm_initsend (int encoding)
  • where encoding is the encoding scheme; PvmDataDefault performs data conversion between different architectures.

  13. Sending a Message
  • Pack various types of data into the buffer with the pvm_pk<datatype> (datapointer, numberofitems, stride) routines:
  • int info = pvm_pkint ( int *ip, int nitem, int stride )
  • int info = pvm_pkbyte ( char *xp, int nitem, int stride )
  • int info = pvm_pkcplx ( float *cp, int nitem, int stride )
  • int info = pvm_pkdouble ( double *db, int nitem, int stride )
  • int info = pvm_pkfloat ( float *fp, int nitem, int stride )
  • int info = pvm_pklong ( long *ip, int nitem, int stride )
  • int info = pvm_pkshort ( short *jp, int nitem, int stride )
  • int info = pvm_pkstr ( char *sp )

  14. Sending a Message
  • Send the buffer contents to a designated location or locations:
  • In C: int info = pvm_send (int tid, int msgtag)
  • where:
  • tid = integer task ID of the receiving process
  • msgtag = message tag (use 1 if you do not care about message tags)
  • info = integer status code returned from the routine (a value < 0 indicates an error)
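
  A minimal sketch of the full send sequence from the last three slides: initialize the buffer, pack an integer array, and send it. The destination task ID, the message tag 99, and the helper function name are illustrative, not from the slides:

    #include "pvm3.h"

    void send_values(int dest_tid, int *values, int n)
    {
        int info;

        info = pvm_initsend(PvmDataDefault);   /* clear and prepare the send buffer */
        info = pvm_pkint(values, n, 1);        /* pack n ints with stride 1 */
        info = pvm_send(dest_tid, 99);         /* send to dest_tid with msgtag 99 */
        if (info < 0)
            pvm_perror("send_values");         /* a value < 0 indicates an error */
    }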

  15. Receiving a Message
  • When receiving a message there are two options:
  • Blocking - wait for a message (pvm_recv)
  • Non-blocking - get a message only if it is pending, otherwise continue on (pvm_nrecv)
  • Only blocking receives are discussed here
  • Receiving takes two steps:
  • Receive a message into a buffer
  • Unpack the buffer into a variable
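
  Although the slides cover only blocking receives, a minimal sketch of the non-blocking variant helps contrast the two; the tag 99 and the do_other_work() helper are illustrative assumptions:

    #include "pvm3.h"

    extern void do_other_work(void);          /* hypothetical useful work */

    void poll_for_message(int src_tid)
    {
        int bufid = 0;

        while (bufid == 0) {
            bufid = pvm_nrecv(src_tid, 99);   /* non-blocking: returns 0 if nothing is pending */
            if (bufid == 0)
                do_other_work();
        }
        /* bufid > 0: a message is now in the active receive buffer, ready to unpack */
    }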

  16. Receiving a Message
  • Receive a message into a buffer:
  • In C: int bufid = pvm_recv (int tid, int msgtag)
  • where:
  • tid = integer task ID of the process that sent the message
  • msgtag = message tag (use 1 if you do not care about message tags)
  • bufid = integer identifier of the new active receive buffer; a value < 0 indicates an error

  17. Receiving a Message
  • Unpack the buffer:
  • Unpack in the same order in which the message was packed
  • In C: int info = pvm_upk<datatype> (datapointer, numberofitems, stride)
  • See the pack routines for the variable definitions
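
  A minimal sketch of the receiving side matching the send sketch above: block until a message with tag 99 arrives, then unpack the integers in the same order they were packed (the tag and the function name are illustrative):

    #include "pvm3.h"

    void recv_values(int src_tid, int *values, int n)
    {
        int bufid, info;

        bufid = pvm_recv(src_tid, 99);      /* block until a matching message arrives */
        info  = pvm_upkint(values, n, 1);   /* unpack in the same order as packed */
        if (bufid < 0 || info < 0)
            pvm_perror("recv_values");
    }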

  18. PVM Collective Communications
  • pvm_bcast: asynchronously broadcasts the data in the active send buffer to a group of processes. The broadcast message is not sent back to the sender.
  • pvm_gather: a specified member receives messages from each member of the group and gathers them into a single array. All group members must call pvm_gather().
  • pvm_scatter: performs a scatter of data from the specified root to each member of the group, including itself. All group members must call pvm_scatter(); each receives a portion of the data array from the root in its local result array.
  • pvm_reduce: performs a reduce operation over the members of the group. All group members call it with their local data, and the result of the reduction appears on the root. Users can define their own reduction functions or use the predefined PVM reductions.
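
  A minimal sketch of a group sum reduction, assuming the predefined PvmSum reduction function and the PVM_INT datatype constant; the group name "workers", message tag 6, and nmembers are illustrative. Every member task would run this code:

    #include "pvm3.h"

    int group_sum(int my_value, int nmembers)
    {
        int inum;

        inum = pvm_joingroup("workers");        /* join (or create) the dynamic group */
        pvm_barrier("workers", nmembers);       /* wait until all members have joined */

        /* Sum my_value over the group; the result lands on the root (instance 0). */
        pvm_reduce(PvmSum, &my_value, 1, PVM_INT, 6, "workers", 0);

        pvm_lvgroup("workers");                 /* leave the group when done */
        return (inum == 0) ? my_value : 0;      /* sum is meaningful only on the root */
    }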

  19. Example: Master-Slave Architecture
  • Writing a simple master-slave code
  • The master code sends the message "HELLO WORLD" to the workers
  • The worker code receives the message and prints it

  #include <stdio.h>
  #include "pvm3.h"

  #define NTASKS 6
  #define HELLO_MSGTYPE 1

  char helloworld[13] = "HELLO WORLD!";

  main()
  {
      int mytid, tids[NTASKS], i, msgtype, rc, bufid;

      for (i = 0; i < NTASKS; i++)
          tids[i] = 0;

      printf("Enrolling master task in PVM...\n");
      mytid = pvm_mytid();                /* enroll this process in PVM */
      bufid = pvm_catchout(stdout);       /* collect worker output on the master's stdout */

      printf("Spawning worker tasks ...\n");
      for (i = 0; i < NTASKS; i++) {
          rc = pvm_spawn("hello.worker", NULL, PvmTaskDefault, "", 1, &tids[i]);
          printf("   spawned worker task id = %8x\n", tids[i]);
      }

      printf("Sending message to all worker tasks...\n");
      msgtype = HELLO_MSGTYPE;
      rc = pvm_initsend(PvmDataDefault);  /* initialize the send buffer */
      rc = pvm_pkstr(helloworld);         /* pack the string */
      for (i = 0; i < NTASKS; i++)
          rc = pvm_send(tids[i], msgtype);

      printf("All done. Leaving hello.master.\n");
      rc = pvm_exit();
  }

  20. MS Architecture: Slave

  #include <stdio.h>
  #include "pvm3.h"

  #define HELLO_MSGTYPE 1

  main()
  {
      int mytid, msgtype, rc;
      char helloworld[13];

      mytid = pvm_mytid();                /* enroll this process in PVM */
      msgtype = HELLO_MSGTYPE;
      rc = pvm_recv(-1, msgtype);         /* receive from any sender with this tag */
      rc = pvm_upkstr(helloworld);        /* unpack the string */
      printf(" ***Reply from spawned process: %s : \n", helloworld);
      rc = pvm_exit();
  }

  21. PVM Hostfile
  • A file containing a list of hostnames that defines your parallel virtual machine
  • Hostnames are listed one per line
  • Several options are available for customization: userid, password, location of pvmd, paths to executables, etc.
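
  A sketch of what such a hostfile might look like; the hostnames, login name, and path are hypothetical, and the option keywords shown (lo= for an alternate userid, ep= for the executable search path) are only two of those available:

    # Hosts in the virtual machine, one per line; '#' starts a comment.
    node1.gwu.edu
    node2.gwu.edu
    # Options may follow a hostname, e.g. an alternate login and executable path:
    node3.gwu.edu  lo=guestuser  ep=$HOME/pvm3/bin/$PVM_ARCH
    # A leading '&' defines a host that is not started now but can be added later:
    &node4.gwu.edu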

  22. XPVM: A Graphical Console and Monitor for PVM

  23. MPI
  • MPI: Message Passing Interface standard
  • MPI is a library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementors, and users.
  • MPI was designed for high performance on both massively parallel machines and workstation clusters.
  • As in all message-passing systems, MPI provides a means of synchronizing processes by stopping each one until they all have reached a specific "barrier" call.

  24. MPI APIs
  • A standard message-passing API
  • Specifies many variants of send/recv:
  • 9 send interface calls, e.g., synchronous send, asynchronous send, ready send, asynchronous ready send
  • Plus other defined APIs:
  • Process topologies
  • Group operations
  • Derived datatypes
  • Implemented and optimized by machine vendors
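
  A minimal sketch of three of those send variants as they appear in C; the destination rank 1, tag 0, and four-element buffer are illustrative:

    #include <mpi.h>

    void send_variants(double *buf)
    {
        MPI_Request req;

        /* Standard send: may buffer or block, at the implementation's choice. */
        MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

        /* Synchronous send: completes only once the receiver has started receiving. */
        MPI_Ssend(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);

        /* Non-blocking send: returns immediately; MPI_Wait completes it later. */
        MPI_Isend(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }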

  25. Collective Communications
  • The principal collective operations operating on data are:
  • MPI_Bcast() - broadcast from the root to all other processes
  • MPI_Gather() - gather values from a group of processes
  • MPI_Scatter() - scatter a buffer in parts to a group of processes
  • MPI_Alltoall() - send data from all processes to all processes
  • MPI_Reduce() - combine values from all processes into a single value
  • MPI_Reduce_scatter() - combine values and scatter the results
  • MPI_Scan() - compute prefix reductions of data across processes
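
  A minimal sketch using two of these routines, MPI_Bcast() and MPI_Reduce(); every rank calls both, with rank 0 acting as the root. The four-element parameter array and the sum of one integer per rank are illustrative:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, params[4] = {0}, local = 1, total = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI_Bcast: the root's params[] is copied to every process. */
        MPI_Bcast(params, 4, MPI_INT, 0, MPI_COMM_WORLD);

        /* MPI_Reduce: sum each process's 'local' into 'total' on the root. */
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %d\n", total);

        MPI_Finalize();
        return 0;
    }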

  26. References
  • PVM:
  • Web site: www.epm.ornl.gov/pvm/pvm_home.html
  • A Java PVM: http://www.cs.virginia.edu/~ajf2j/jpvm.html
  • Book: PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing, Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidy Sunderam (www.netlib.org/pvm3/book/pvm-book.html)
  • MPI:
  • Web site: http://www-unix.mcs.anl.gov/mpi
  • A Java MPI: http://aspen.ucs.indiana.edu/pss/HPJava/mpiJava.html
  • Book: MPI - The Complete Reference, Volume 1: The MPI Core, 2nd edition, Snir et al., MIT Press, 1999.
  • Two freely available MPI libraries that you can download and install:
  • LAM - http://www.lam-mpi.org
  • MPICH - http://www-unix.mcs.anl.gov/mpi/mpich
