Parallel Programming Dr Andy Evans
Parallel programming
There are various options, but a popular one is the Message Passing Interface (MPI). This is a standard for talking between nodes, implemented in a variety of languages. With shared-memory systems we could just write to the shared memory, but triggering events by continually checking memory isn't very efficient; message passing is better. The API description for Java was formulated by the Java Grande Forum. A good implementation is MPJ Express (http://mpj-express.org), which provides both the language implementation and a runtime/manager.
Other implementations
mpiJava: http://www.hpjava.org/mpiJava.html
P2P-MPI: http://grid.u-strasbg.fr/p2pmpi/ (well set up for peer-to-peer development)
Some (like mpiJava) require an underlying C implementation to wrap around, such as LAM: http://www.lam-mpi.org
MPJ Express
Allows you to use its MPI library to run MPI code, and sorts out the communication as well. It runs in Multicore Configuration, i.e. on one PC: each process runs as a thread, distributed around the available cores. This is great for developing and testing. It also runs in Cluster Configuration, i.e. on multiple PCs.
How to check processor/core numbers
My Computer → Properties
Right-click the taskbar → Start Task Manager (→ Resource Monitor in Windows 8)
With Java: Runtime.getRuntime().availableProcessors();
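For example, as a quick standalone check (no MPI needed), the Java call can be wrapped in a minimal class and run directly; note that it reports logical processors, which may include hyper-threads:

public class CoreCheck {
    public static void main(String[] args) {
        // Number of logical processors the JVM can use
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cores);
    }
}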
General outline
You write the same code for all nodes; however, the behaviour changes depending on the node number. You can also open sockets to other nodes and send them stuff if they are listening:
if (node == 0) {
    listen();
} else {
    sendData();
}
Usually the MPI environment will organise running the code on the other nodes, if you tell it to run the code and how many nodes you want.
MPI basics
API definition for communicating between nodes.
MPI.Init(args): call the initiation code with a String[] of arguments.
MPI.Finalize(): shut down.
MPI.COMM_WORLD.Size(): get the number of available nodes.
MPI.COMM_WORLD.Rank(): get the node the code is running on.
Usually within a try-catch:
} catch (MPIException mpiE) {
    mpiE.printStackTrace();
}
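A minimal sketch putting these calls together (the class name HelloMPI is arbitrary, and the exact exception handling may differ between MPJ Express versions):

import mpi.*;

public class HelloMPI {
    public static void main(String[] args) {
        try {
            // Start MPI, passing in the command-line arguments
            MPI.Init(args);

            int rank = MPI.COMM_WORLD.Rank();   // which node this copy of the code is on
            int size = MPI.COMM_WORLD.Size();   // how many nodes there are in total

            System.out.println("Hello from node " + rank + " of " + size);

            // Shut MPI down again
            MPI.Finalize();
        } catch (MPIException mpiE) {
            mpiE.printStackTrace();
        }
    }
}

With MPJ Express installed, this is typically launched with the bundled mpjrun script, e.g. mpjrun.sh -np 4 HelloMPI for Multicore Configuration (check the MPJ Express documentation for the exact flags on your platform).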
Load balancing
This kind of thing is common: node 0 acts as a coordinator, the agents are split evenly across the other nodes, and the last node takes any remainder (e.g. 100 agents on 4 nodes gives the three worker nodes 33, 33 and 34 agents).
int nodeNumberOfAgents = 0;
if (node != 0) {
    nodeNumberOfAgents = numberOfAgents / (numberOfNodes - 1);
    if (node == (numberOfNodes - 1)) {
        nodeNumberOfAgents = nodeNumberOfAgents + (numberOfAgents % (numberOfNodes - 1));
    }
    agents = new Agent[nodeNumberOfAgents];
    for (int i = 0; i < nodeNumberOfAgents; i++) {
        agents[i] = new Agent();
    }
}
Sending stuff
MPI.COMM_WORLD.Send(java.lang.Object, startIndex, lengthToSend, dataType, nodeToSendTo, messageIntId);
All sent objects must be 1D arrays, even if there is only one thing in them.
dataType:
Array of booleans: MPI.BOOLEAN
Array of doubles: MPI.DOUBLE
Array of ints: MPI.INT
Array of nulls: MPI.NULL
Array of objects: MPI.OBJECT
Objects must implement java.io.Serializable.
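As a sketch, a worker node might send a single result back to node 0 like this (the variable averageWealth and the tag value 50 are made up for illustration; the tag just has to match the one used in the Recv):

// On a worker node: wrap the result in a 1D array and send it to node 0
double[] results = new double[] { averageWealth };
int tag = 50;
MPI.COMM_WORLD.Send(results, 0, results.length, MPI.DOUBLE, 0, tag);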
Receiving stuff
MPI.COMM_WORLD.Recv(java.lang.Object, startIndex, lengthToGet, dataType, nodeSending, messageIntId);
The Object is a 1D array that the data gets put into.
It might, for example, sit in a loop that increments nodeSending, to receive from all the other nodes, as sketched below.
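A sketch of the matching receive on node 0, looping over the worker nodes (again, the buffer, tag value and loop bounds are illustrative and must match the Send above):

// On node 0: collect one result from each of the other nodes in turn
double[] buffer = new double[1];
int tag = 50;
int numberOfNodes = MPI.COMM_WORLD.Size();
for (int nodeSending = 1; nodeSending < numberOfNodes; nodeSending++) {
    MPI.COMM_WORLD.Recv(buffer, 0, 1, MPI.DOUBLE, nodeSending, tag);
    System.out.println("Result from node " + nodeSending + ": " + buffer[0]);
}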
Other MPI commands
Any implementation of the API should have the same methods, etc. For MPJ Express, see: http://mpj-express.org/docs/javadocs/index.html