
DISTRIBUTED ALGORITHMS


Presentation Transcript


  1. Several sets of slides by Prof. Jennifer Welch will be used in this course. The slides are mostly identical to her slides, with some minor changes. Set 1: Introduction

  2. Set 1: Introduction DISTRIBUTED ALGORITHMS Spring 2014 Prof. Jennifer Welch

  3. Distributed Systems
  • Distributed systems have become ubiquitous:
    • share resources
    • communicate
    • increase performance
      • speed
      • fault tolerance
  • Characterized by:
    • independent activities (concurrency)
    • loosely coupled parallelism (heterogeneity)
    • inherent uncertainty

  4. Uncertainty in Distributed Systems
  • Uncertainty comes from:
    • differing processor speeds
    • varying communication delays
    • (partial) failures
    • multiple input streams and interactive behavior

  5. Reasoning about Distributed Systems
  • Uncertainty makes it hard to be confident that a system is correct.
  • To address this difficulty:
    • identify and abstract fundamental problems
    • state problems precisely
    • design algorithms to solve the problems
    • prove correctness of the algorithms
    • analyze complexity of the algorithms (e.g., time, space, messages)
    • prove impossibility results and lower bounds

  6. Potential Payoff of Theoretical Paradigm
  • careful specifications clarify intent
  • increased confidence in correctness
  • if abstracted well, then results are relevant in multiple situations
  • indicate inherent limitations
    • cf. NP-completeness

  7. Application Areas
  • These areas have provided classic problems in distributed/concurrent computing:
    • operating systems
    • (distributed) database systems
    • software fault-tolerance
    • communication networks
    • multiprocessor architectures
  • Newer application areas:
    • cloud computing
    • mobile computing, …

  8. Course Overview: Part I (Fundamentals)
  • Introduce two basic communication models:
    • message passing
    • shared memory
  • and two basic timing models:
    • synchronous
    • asynchronous

  9. Course Overview: Basic Models

                   Message passing   Shared memory
    synchronous         Yes               No
    asynchronous        Yes               Yes

  (The synchronous shared memory model is the PRAM.)

  10. Course Overview: Part I
  • Covers the canonical problems and issues:
    • graph algorithms (Ch 2)
    • leader election (Ch 3)
    • mutual exclusion (Ch 4)
    • fault-tolerant consensus (Ch 5)
    • causality and time (Ch 6)

  11. Course Overview: Part II (Simulations)
  • Here "simulations" means abstractions, or techniques for making it easier to program, by making one model appear to be an easier model. For example:
    • broadcast and multicast (Ch 8)
    • distributed shared memory (Ch 9)
    • stronger kinds of shared variables (Ch 10)
    • more synchrony (Chs 11, 13)
    • more benign faults (Ch 12)

  12. Course Overview: Part II
  • For each of the techniques:
    • describe algorithms for implementing it
    • analyze the cost of these algorithms
    • explore limitations
    • mention applications that use the techniques

  13. Course Overview: Part III (Advanced Topics)
  • Push further in some directions already introduced:
    • randomized algorithms (Ch 14)
    • stronger kinds of shared objects of arbitrary type (Ch 15)
    • what kinds of problems are solvable in asynchronous systems (Ch 16)
    • failure detectors (Ch 17)
    • self-stabilization

  14. Course Overview: Part IV (Other Topics)
  • Debugging of parallel programs
  • Distributed function computation
  • Distributed algorithms for wireless networks

  15. Relationship of Theory to Practice
  • time-shared operating systems: issues relating to (virtual) concurrency of processes, such as
    • mutual exclusion
    • deadlock
    also arise in distributed systems
  • MIMD multiprocessors:
    • no common clock => asynchronous model
    • common clock => synchronous model
  • loosely coupled networks, such as the Internet => asynchronous model

  16. Relationship of Theory to Practice
  • Failure models:
    • crash: a faulty processor just stops; an idealization of reality
    • Byzantine (arbitrary): a conservative assumption; fits when the failure model is unknown or malicious
    • self-stabilization: the algorithm automatically recovers from transient corruption of its state; appropriate for long-running applications

  17. Message-Passing Model
  • processors are p0, p1, …, pn-1 (nodes of graph)
  • bidirectional point-to-point channels (undirected edges of graph)
  • each processor labels its incident channels 1, 2, 3, …; might not know who is at the other end

  18. Message-Passing Model
  [Figure: a graph of four processors p0, p1, p2, p3 joined by bidirectional channels, with each processor's labels 1, 2, 3, … on its incident channels.]

  19. Modeling Processors and Channels
  • Processor is a state machine including:
    • local state of the processor
    • mechanisms for modeling channels
  • Channel directed from processor pi to processor pj is modeled in two pieces:
    • outbuf variable of pi, and
    • inbuf variable of pj
  • Outbuf corresponds to the physical channel, inbuf to the incoming message queue.

  20. Modeling Processors and Channels
  [Figure: p1 and p2 joined by channels in both directions; each direction consists of the sender's outbuf plus the receiver's inbuf, indexed by channel label. The pink area (local vars + inbuf) is the accessible state of a processor.]

  21. Configuration
  • Vector of processor states (including outbufs, i.e., channels), one per processor, is a configuration of the system.
  • Captures a current snapshot of the entire system: accessible processor states (local vars + incoming msg queues) as well as communication channels.

  22. Deliver Event
  • Moves a message from sender's outbuf to receiver's inbuf; message will be available next time receiver takes a step.
  [Figure: messages m1, m2, m3 in the channel from p1 to p2, shown before and after a deliver event.]

  23. Computation Event
  • Occurs at one processor.
  • Start with old accessible state (local vars + incoming messages).
  • Apply transition function of processor's state machine; handles all incoming messages.
  • End with new accessible state with empty inbufs, and new outgoing messages.

  24. Computation Event
  [Figure: a processor moves from its old local state to a new local state; incoming messages a and b are consumed, and outgoing messages c, d, e are produced. Pink indicates accessible state (local vars and incoming msgs); white indicates outgoing msg buffers.]

  25. Execution
  • Format is config, event, config, event, config, …
  • In the first config: each processor is in its initial state and all inbufs are empty.
  • For each consecutive (config, event, config), the new config is the same as the old config except:
    • if a delivery event: the specified msg is transferred from the sender's outbuf to the receiver's inbuf
    • if a computation event: the specified processor's state (including outbufs) changes according to its transition function
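The deliver/computation machinery of the last few slides can be sketched in code. This is only an illustrative sketch: the class, the dict-based buffers, and the function names are assumptions of this sketch, not part of the slides' formal model.

```python
# Sketch of the execution model: a configuration is the collection of
# processor states (local vars + inbuf + outbufs); it evolves via
# deliver events and computation events.

class Processor:
    def __init__(self, pid, state, transition):
        self.pid = pid
        self.state = state            # local variables
        self.inbuf = []               # incoming message queue
        self.outbuf = {}              # neighbor pid -> list of messages in transit
        self.transition = transition  # the state machine's transition function

def deliver_event(sender, receiver):
    """Move one message from the sender's outbuf to the receiver's inbuf."""
    msg = sender.outbuf[receiver.pid].pop(0)
    receiver.inbuf.append(msg)

def computation_event(proc):
    """Apply the transition function to the accessible state: consume all
    inbuf messages, compute a new local state, and emit new messages."""
    new_state, outgoing = proc.transition(proc.state, proc.inbuf)
    proc.inbuf = []                   # inbufs are empty afterwards
    proc.state = new_state
    for nbr, msgs in outgoing.items():
        proc.outbuf.setdefault(nbr, []).extend(msgs)
```

A deliver event followed by a computation event at the receiver then realizes one message transfer, matching the (config, event, config, …) format above.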

  26. Admissibility
  • Definition of execution gives some basic "syntactic" conditions:
    • usually safety conditions (true in every finite prefix)
  • Sometimes we want to impose additional constraints:
    • usually liveness conditions (eventually something happens)
  • Executions satisfying the additional constraints are admissible. These are the executions that must solve the problem of interest.
  • Definition of "admissible" can change from context to context, depending on details of what we are modeling.

  27. Asynchronous Executions
  • An execution is admissible for the asynchronous model if:
    • every message in an outbuf is eventually delivered
    • every processor takes an infinite number of steps
  • No constraints on when these events take place: arbitrary message delays and relative processor speeds are not ruled out.
  • Models a reliable system (no message is lost and no processor stops working).

  28. Example: Flooding
  • Describe a simple flooding algorithm as a collection of interacting state machines.
  • Each processor's local state consists of variable color, either red or green.
  • Initially:
    • p0: color = green, all outbufs contain M
    • others: color = red, all outbufs empty
  • Transition: If M is in an inbuf and color = red, then change color to green and send M on all outbufs.
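The transition rule above fits in a single function. A minimal sketch, with the string encodings of colors and of M chosen for illustration:

```python
# Flooding transition: if M has arrived and the processor is still red,
# turn green and forward M on every incident channel; otherwise do nothing.

def flooding_transition(color, inbuf, neighbors):
    """Return (new color, dict mapping neighbor -> messages to send)."""
    if "M" in inbuf and color == "red":
        return "green", {nbr: ["M"] for nbr in neighbors}
    return color, {}
```

Note the rule is idempotent: a green processor ignores further copies of M, which is what bounds the message complexity later.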

  29. Example: Flooding
  [Figure: an execution on the triangle p0, p1, p2 — a deliver event at p1 from p0, a computation event by p1 (p1 turns green and sends M), a deliver event at p2 from p1, and a computation event by p2 (p2 turns green and sends M).]

  30. Example: Flooding (cont'd)
  [Figure: the execution continues — a deliver event at p1 from p2, a computation event by p1 (already green, no change), a deliver event at p0 from p1, etc., to deliver the rest of the msgs.]

  31. Nondeterminism
  • The previous execution is not the only admissible execution of the flooding algorithm on that triangle.
  • There are several, depending on the order in which messages are delivered.
  • For instance, the message from p0 could arrive at p2 before the message from p1 does.

  32. Termination
  • For technical reasons, admissible executions are defined as infinite.
  • But often algorithms terminate.
  • To model algorithm termination, identify terminated states of processors: states which, once entered, are never left.
  • Execution has terminated when all processors are terminated and no messages are in transit (in inbufs or outbufs).

  33. Termination of Flooding Algorithm
  • Define terminated processor states as those in which color = green.

  34. Message Complexity Measure
  • Message complexity: maximum number of messages sent in any admissible execution.
  • This is a worst-case measure.
  • Later we will mention average-case measures.

  35. Message Complexity of Flooding Algorithm
  • One message is sent over each edge in each direction, so the number is 2m, where m = number of edges.

  36. Time Complexity Measure
  • How can we measure time in asynchronous executions?
  • Produce a timed execution from an execution by assigning non-decreasing real times to events such that the time between sending and receiving any message is at most 1.
  • Essentially normalizes the greatest message delay in an execution to be one time unit; still allows arbitrary interleavings of events.
  • Time complexity: maximum time until termination in any timed admissible execution.

  37. Time Complexity of Flooding Algorithm
  • Recall that terminated processor states are those in which color = green.
  • Time complexity: diameter + 1 time units. (A node turns green once a "chain" of messages has reached it from p0.)
  • The diameter of a graph is the maximum, over all nodes v and w in the graph, of the length of the shortest path from v to w.
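The diameter in this bound can be computed by breadth-first search from every node. A sketch (the function names are this sketch's own, not from the slides), assuming the graph is given as an adjacency dict:

```python
from collections import deque

def eccentricity(adj, src):
    """Length of the longest shortest path from src, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return max(dist.values())

def diameter(adj):
    """Max over all nodes v, w of the shortest-path length from v to w."""
    return max(eccentricity(adj, v) for v in adj)

# The triangle from the flooding example: every node is one hop from
# every other, so the diameter is 1 and flooding takes 1 + 1 = 2 time units.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```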

  38. Synchronous Message Passing Systems
  • An execution is admissible for the synchronous model if it is an infinite sequence of "rounds".
  • What is a "round"?
    • It is a sequence of deliver events that move all messages in transit into inbufs, followed by a sequence of computation events, one for each processor.

  39. Synchronous Message Passing Systems
  • The new definition of admissible captures the lockstep unison feature of the synchronous model.
  • This definition also implies:
    • every message sent is delivered
    • every processor takes an infinite number of steps
  • Time is measured as the number of rounds until termination.

  40. Example of Synchronous Model
  • Suppose the flooding algorithm is executed in the synchronous model on the triangle.
  • Round 1:
    • deliver M to p1 from p0
    • deliver M to p2 from p0
    • p0 does nothing (as it has no incoming messages)
    • p1 receives M, turns green and sends M to p0 and p2
    • p2 receives M, turns green and sends M to p0 and p1

  41. Example of Synchronous Model
  • Round 2:
    • deliver M to p0 from p1
    • deliver M to p0 from p2
    • deliver M to p1 from p2
    • deliver M to p2 from p1
    • p0 does nothing since its color variable is already green
    • p1 does nothing since its color variable is already green
    • p2 does nothing since its color variable is already green

  42. Example of Synchronous Model
  [Figure: the triangle p0, p1, p2 — round 1 events deliver p0's copies of M to p1 and p2, which turn green and send M; round 2 events deliver those messages, after which nothing further changes.]

  43. Complexity of Synchronous Flooding Algorithm
  • Just consider executions that are admissible w.r.t. the synchronous model (i.e., that satisfy the definition of the synchronous model).
  • Time complexity is diameter + 1.
  • Message complexity is 2m.
  • Same as for the asynchronous case.
  • Not true for all algorithms, though…
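Both bounds can be checked by simulating the rounds directly. A self-contained sketch (data representation and function name are assumptions of this sketch): it runs synchronous flooding round by round, counting rounds until no messages remain in transit and counting every message sent.

```python
# Sketch: synchronous flooding on an adjacency-dict graph, round by round.
# A round delivers all messages in transit, then lets each newly reached
# (red) processor turn green and send M to all its neighbors.

def sync_flooding(adj, root=0):
    """Return (rounds until termination, total messages sent)."""
    color = {v: "red" for v in adj}
    color[root] = "green"
    in_transit = [(root, nbr) for nbr in adj[root]]  # root's initial outbufs
    sent = len(in_transit)
    rounds = 0
    while in_transit:                # terminated: no messages in transit
        rounds += 1
        delivered, in_transit = in_transit, []
        for (src, dst) in delivered:
            if color[dst] == "red":  # the flooding transition rule
                color[dst] = "green"
                new = [(dst, nbr) for nbr in adj[dst]]
                in_transit.extend(new)
                sent += len(new)
    return rounds, sent
```

On the triangle this should give 2 rounds (diameter 1, plus one round to drain the last messages) and 6 messages (2m with m = 3 edges), matching the slide.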
