
Parallel and Distributed Processing CSE 8380








  1. Parallel and Distributed Processing CSE 8380 February 10, 2005 Session 9

  2. Contents • Message Passing Model • Complexity Analysis • Summation • Leader Election

  3. Introduction • The recent migration to distributed systems has increased the need for a better understanding of the message passing computational model and its algorithms • A processing unit in such systems is an autonomous computer, which may be engaged in its own private computation while at the same time cooperating with other units in the context of some computational task.

  4. Message Passing Computing Models • Algorithm – collection of local programs running concurrently on different processing units. Each program performs a sequence of computation and message passing operations • Message passing can be represented using a communication graph • Processor vs. Process

  5. Degree of Synchrony • Synchronous – Computation and communication are done in a lockstep manner (rounds of send, receive, compute) • Asynchronous – processing units take steps at arbitrary speeds and communication delay is unpredictable • Partially synchronous – restrictions on the relative timing of events

  6. Synchronous Model • n processors & communication graph G(V,E) • Modeled as a state machine • System is initialized and set to an arbitrary initial state • For each process i in V repeat in synchronized rounds • Send messages to outgoing neighbors by applying some message generation function to current state • Obtain the new state by applying a state transition function to the current state and the received messages

  7. Asynchronous Model • n processors & communication graph G(V,E) • Communication does not happen in synchronized rounds • Messages incur an unbounded and unpredictable delay • I/O automata (a simple type of state machine in which transitions are associated with actions) are used to model asynchronous systems

  8. Synchronous model as a state machine • M, a fixed message alphabet • A process i can be modeled as • Qi, a (possibly infinite) set of states. The system state can be represented using a set of variables. • q0,i, the initial state in the state set Qi. The state variables have initial values in the initial state. • GenMsgi, a message generation function. It is applied to the current system state to generate messages to the outgoing neighbors from elements in M. • Transi, a state transition function that maps the current state and the incoming messages into a new state.

  9. Algorithm Template
Algorithm S_Template
  Qi: <state variables used by process i>
  q0,i: <state variables> ← <initial values>
  GenMsgi: <send one message to each of a (possibly empty) subset of outgoing neighbors>
  Transi: <update the state variables based on the incoming messages>
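The template above can be sketched as an executable simulation. This is a minimal Python sketch, not part of the original slides: the function names `run_synchronous`, `gen_msg`, and `trans` are assumptions standing in for GenMsgi and Transi, and message delivery is modeled as a shared inbox per round.

```python
def run_synchronous(neighbors, init_states, gen_msg, trans, rounds):
    """Run `rounds` lockstep rounds of the synchronous state machine.

    neighbors:   dict, process -> list of outgoing neighbors
    init_states: dict, process -> initial state (q0,i)
    gen_msg(i, state):      returns {destination: message}   (GenMsgi)
    trans(i, state, inbox): returns the new state            (Transi)
    """
    states = dict(init_states)
    for _ in range(rounds):
        inbox = {i: [] for i in states}
        # Phase 1: every process applies its message generation function.
        for i in states:
            for dest, msg in gen_msg(i, states[i]).items():
                inbox[dest].append(msg)
        # Phase 2: every process applies its state transition function.
        for i in states:
            states[i] = trans(i, states[i], inbox[i])
    return states

# Example: two processes on a 2-node ring, each counting received messages.
ring = {0: [1], 1: [0]}
final = run_synchronous(
    ring,
    {0: 0, 1: 0},
    lambda i, s: {n: 1 for n in ring[i]},   # send "1" to each neighbor
    lambda i, s, inbox: s + sum(inbox),     # count incoming messages
    rounds=3,
)
print(final)  # → {0: 3, 1: 3}
```

The two phases per iteration mirror the lockstep rounds of the synchronous model: no message generated in a round is visible before the transition phase of that same round.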

  10. Complexity Analysis • Message Complexity – number of messages sent between neighbors during the execution of the algorithm, in the worst case • Time Complexity – Synchronous: number of rounds

  11. Summation on a hypercube
Algorithm S_Sum_Hypercube
  Qi: buff, an integer; dim, a value in {0, 1, 2, ..., log n}
  q0,i: buff ← xi; dim ← log n
  GenMsgi: if the current value of dim = 0, do nothing; otherwise, send the current value of buff to the neighbor along dimension dim
  Transi: if the incoming message is v and dim > 0, then buff ← buff + v and dim ← dim − 1
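A small Python simulation of S_Sum_Hypercube may make the rounds concrete. This is a sketch under one indexing assumption not fixed by the slides: the neighbor of node i along dimension d is i with bit d−1 flipped (i XOR 2^(d−1)).

```python
import math

def hypercube_sum(x):
    """Simulate S_Sum_Hypercube: after log n rounds every node holds sum(x).

    x: list of the local values xi; len(x) must be a power of two.
    """
    n = len(x)
    logn = int(math.log2(n))
    buff = list(x)          # buff, initialized to xi
    dim = [logn] * n        # dim, initialized to log n
    for _ in range(logn):   # log n synchronous rounds
        incoming = [0] * n
        for i in range(n):  # GenMsg: send buff along dimension dim
            if dim[i] > 0:
                incoming[i ^ (1 << (dim[i] - 1))] = buff[i]
        for i in range(n):  # Trans: buff ← buff + v, dim ← dim − 1
            if dim[i] > 0:
                buff[i] += incoming[i]
                dim[i] -= 1
    return buff             # every node ends with the global sum

print(hypercube_sum([1, 2, 3, 4]))  # → [10, 10, 10, 10]
```

Because every node is on the same dimension in the same round, each node receives exactly one message per round, and after log n rounds all n nodes hold the full sum.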

  12. Group Work Work the following example with your neighbor

  13. Leader Election Problem • A leader among n processors is the one that is recognized by all other processors as distinguished to perform a special task • The problem occurs when the processors must choose one of themselves as the leader • Each processor should eventually decide whether or not it is the leader (each processor is aware only of its own identification) • Leader election is most important when coordination among processors becomes necessary to recover from a failure or a topological change.

  14. A solution • Given a communication graph G = (V,E) • Two steps • Each node in the graph broadcasts its unique identifier to all other nodes • After receiving the identifiers of all nodes, the node with the highest identifier declares itself the leader • Complexity • Time • Message
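The two-step scheme above can be sketched by simulating the broadcast as synchronous flooding over the graph. This is an illustrative sketch, not the slides' own code; the function name `elect_by_flooding` and the dict-based adjacency representation are assumptions.

```python
def elect_by_flooding(adj):
    """adj: dict, node id -> list of neighbor ids (a connected graph,
    with each edge listed in both directions)."""
    # Step 1: flood identifiers until every node has heard every ID.
    known = {v: {v} for v in adj}
    changed = True
    while changed:
        changed = False
        snapshot = {v: set(known[v]) for v in adj}  # synchronous round
        for v in adj:
            for u in adj[v]:
                before = len(known[v])
                known[v] |= snapshot[u]             # relay everything heard
                changed |= len(known[v]) > before
    # Step 2: the node with the highest identifier declares itself leader.
    return {v: v == max(known[v]) for v in adj}

# A 3-node path graph: node 3 has the highest ID, so it becomes leader.
print(elect_by_flooding({1: [2], 2: [1, 3], 3: [2]}))
# → {1: False, 2: False, 3: True}
```

Flooding takes a number of rounds proportional to the graph's diameter, which is one way to start answering the Time and Message bullets on the slide.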

  15. Leader election in synchronous rings • By Chang and Roberts • Assumptions • Communication is unidirectional (clockwise) • The size of the ring is not known • The ID of each node is unique

  16. Leader election in synchronous rings Algorithm • Each process sends its ID to its outgoing neighbor • When a process receives an ID from its incoming neighbor: • If the received ID is less than its own ID, the process sends null to its outgoing neighbor • If the received ID is greater than its own ID, the process forwards the received ID to its outgoing neighbor • If the received ID is equal to its own ID, the process declares itself the leader
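The steps above can be simulated round by round on a unidirectional ring. This Python sketch assumes process i's outgoing neighbor is position i+1 (clockwise) and models "null" with Python's `None`; the name `ring_leader` is an illustration, not from the slides.

```python
def ring_leader(ids):
    """Simulate the synchronous-ring algorithm; ids must be unique."""
    n = len(ids)
    outgoing = list(ids)     # round 1: every process sends its own ID
    leader = None
    while leader is None:
        # Each process receives what its incoming (counterclockwise)
        # neighbor sent last round.
        received = [outgoing[(i - 1) % n] for i in range(n)]
        for i in range(n):
            r = received[i]
            if r is None:
                outgoing[i] = None      # nothing to relay
            elif r < ids[i]:
                outgoing[i] = None      # swallow smaller IDs: send null
            elif r > ids[i]:
                outgoing[i] = r         # forward larger IDs
            else:
                leader = ids[i]         # own ID came full circle: leader
    return leader

print(ring_leader([3, 7, 2, 5]))  # → 7
```

Only the maximum ID survives a full trip around the ring, so exactly one process sees its own ID return, which is why the algorithm elects a unique leader.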

  17. Complexity Analysis • Given n processors connected via a ring • Time Complexity – O(n) • Message Complexity – O(n²) Why?

  18. Group Work 1- Work the following example with your neighbor 2- Study the improved leader election algorithm
