
Time, Clocks, and the Ordering of Events in a Distributed System


Presentation Transcript


  1. Time, Clocks, and the Ordering of Events in a Distributed System Leslie Lamport Massachusetts Computer Associates, Inc. Presented by Lokendra

  2. Problem! Q. WHICH REQUEST WAS MADE FIRST? (Diagram: Node A and Node B in a distributed system each send a request, request A and request B, for a common ‘Resource’.)

  3. Problem! Q. WHICH REQUEST WAS MADE FIRST? Proposed solution: a global clock? Global synchronization? (Same diagram: Node A and Node B requesting a common ‘Resource’.)

  4. Problem! Q. WHICH REQUEST WAS MADE FIRST? Proposed solution: individual clocks? Are individual clocks accurate and precise? One clock might run faster or slower than another (diagram: Node A reads 10:00 AM while Node B reads 10:02 AM).

  5. Problem! • Synchronization in a distributed system • Event ordering: which event occurred first? • How do we synchronize the clocks across the nodes? • Can we define a notion of ‘happened before’ without using physical clocks?

  6. Approach towards a solution • Develop a logical model of event ordering (without using physical clocks) • This alone might permit anomalous behavior • Bring in physical clocks and integrate them with the logical model • Fix the anomalous behavior • Make sure the physical clocks are synchronized

  7. Partial Ordering • The system is composed of a collection of processes • Each process consists of a sequence of events (instructions/subprograms): Process P : instr1 instr2 instr3 … (a total order within the process) • Processes communicate by sending and receiving messages • ‘Send’ : an event • ‘Receive’ : an event

  8. ‘Happened Before’ Relation • a ‘happened before’ b is written a → b • If a and b are events in the same process, and a comes before b, then a → b • If a is the sending of a message and b is the receipt of the same message, then a → b • If a → b and b → c, then a → c • Two distinct events a and b are said to be concurrent if a -/-> b and b -/-> a
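The relation can be made concrete with a small sketch (not from the slides; the event names and history below are illustrative): happened-before is the transitive closure of program order within each process plus the send → receive edges between processes.

```python
# A toy happened-before check (illustrative only): take the direct edges given
# by program order and by message send -> receive, then close transitively.
from itertools import product

def happened_before(edges):
    """Return the set of all pairs (a, b) with a -> b, given direct edges."""
    hb = set(edges)
    changed = True
    while changed:                       # naive transitive closure
        changed = False
        for (a, b), (c, d) in product(list(hb), repeat=2):
            if b == c and (a, d) not in hb:
                hb.add((a, d))
                changed = True
    return hb

# Program order: p1 -> p2 in process P; q1 -> q2 -> q3 in process Q.
# Message edge: p1 (send) -> q2 (receipt of the same message).
hb = happened_before([("p1", "p2"), ("q1", "q2"), ("q2", "q3"), ("p1", "q2")])
print(("p1", "q3") in hb)                        # True: p1 -> q2 -> q3
print(("p2", "q3") in hb or ("q3", "p2") in hb)  # False: p2 and q3 are concurrent
```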

  9. Space-Time Diagram (Figure: processes P, Q, R with events and messages between them.) Examples from the diagram: p1 → p2, p1 → q2, p1 → r4, q2 -/-> p2, q3 -/-> p2

  10. Logical Clocks • Define a clock Ci for each process Pi to be a function that assigns a number Ci(a) to any event a in that process • The logical clock Ci has no relation to the physical clock • Clock Condition: for any events a, b: if a → b, then C(a) < C(b) • The converse is NOT true (concurrent events may receive timestamps in either order) • The Clock Condition is satisfied if: C1. If a and b are events in process Pi, and a comes before b, then Ci(a) < Ci(b). C2. If a is the sending of a message by process Pi and b is the receipt of that message by process Pj, then Ci(a) < Cj(b)

  11. Space Time Diagram • A clock ticks through every event • The ticks happen between the events

  12. Logical Clocks • The following implementation rules satisfy the Clock Condition: IR1. Each process Pi increments Ci between any two successive events. IR2. (a) If event a is the sending of a message m by process Pi, then the message m contains a timestamp Tm = Ci(a). (b) Upon receiving a message m, process Pj sets Cj greater than or equal to its present value and greater than Tm.
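A minimal sketch of IR1/IR2 in Python (the class and method names are my own, not the paper's): each process keeps a counter, increments it between events, stamps outgoing messages, and on receipt advances its clock past the message timestamp.

```python
# A minimal Lamport clock: IR1 in tick(), IR2(a) in send(), IR2(b) in receive().
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # IR1: increment between any two successive events
        self.time += 1
        return self.time

    def send(self):
        # IR2(a): the outgoing message carries Tm = Ci(a)
        return self.tick()

    def receive(self, tm):
        # IR2(b): set Cj greater than its present value and greater than Tm
        self.time = max(self.time, tm) + 1
        return self.time

p, q = LamportClock(), LamportClock()
tm = p.send()           # P sends with Tm = 1
q.tick()                # an unrelated local event at Q (Cq = 1)
print(q.receive(tm))    # 2: Cq jumps past Tm, so C(send) < C(receive)
```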

  13. Logical Clocks (Figure: correction of clocks in the space-time diagram.)

  14. Total Ordering of Events • One more relation, an arbitrary total ordering < of the processes, is used to break ties and obtain a total order of events • The total order relation ⇒ is defined as: a ⇒ b if and only if either (i) Ci(a) < Cj(b), or (ii) Ci(a) = Cj(b) and Pi < Pj
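The tie-breaking rule amounts to comparing (logical time, process id) pairs lexicographically; a tiny illustrative sketch (the event tuples below are made up):

```python
# The total order => compares (Ci(e), process id) pairs, so equal logical
# times are broken by the arbitrary process ordering P1 < P2 < P3.
events = [
    (2, "P2", "b"),     # (logical timestamp, process id, event name)
    (2, "P1", "a"),
    (1, "P3", "c"),
]
total = sorted(events, key=lambda e: (e[0], e[1]))
print([name for _, _, name in total])   # ['c', 'a', 'b']: the tie between a and b is broken by P1 < P2
```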

  15. Mutual Exclusion Problem • A process using the resource must release it before it can be given to another process • Requests for the resource must be granted in the order in which they were made • The conditions do not say what should be done when two processes request the resource concurrently • If every process using the resource eventually releases it, then every request is eventually granted (no starvation)

  16. Solution - Distributed Algorithm • Each process maintains its own request queue • The algorithm (a code sketch follows this slide): • Resource request: • Process Pi sends the message Tm : Pi requests resource to every other process • Pi also puts the request message on its own request queue • Resource request receipt: • Pj receives Pi’s request message • Pj puts the message on its request queue • Pj sends an acknowledgement to Pi (timestamped later) • Resource release: • Pi removes the message Tm : Pi requests resource from its queue • Pi sends the message Pi releases resource to every other process • Resource release receipt: • Pj receives Pi’s release message • Pj removes Tm : Pi requests resource from its request queue • Resource allocation: Pi is allocated the resource when: • There is a Tm : Pi requests resource message in Pi’s request queue that is ordered before every other request in the queue by the relation ⇒ • Pi has received a message from every other process timestamped later than Tm
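A compact, single-threaded sketch of the five rules (the message tuple format, the method names, and the last_seen bookkeeping are assumptions for illustration; delivery of the returned messages to the other processes, and concurrency, are left out):

```python
class Process:
    def __init__(self, pid, peers):
        self.pid = pid                              # this process's id (an int)
        self.peers = peers                          # ids of all other processes
        self.clock = 0                              # Lamport clock
        self.queue = {}                             # requester id -> request timestamp
        self.last_seen = {p: 0 for p in peers}      # latest timestamp heard from each peer

    def request(self):                              # Rule 1: broadcast "Pi requests resource"
        self.clock += 1
        self.queue[self.pid] = self.clock
        return ("request", self.clock, self.pid)

    def on_request(self, ts, sender):               # Rule 2: queue it, acknowledge with a later timestamp
        self.clock = max(self.clock, ts) + 1
        self.queue[sender] = ts
        self.last_seen[sender] = max(self.last_seen[sender], ts)
        return ("ack", self.clock, self.pid)

    def on_ack(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        self.last_seen[sender] = max(self.last_seen[sender], ts)

    def release(self):                              # Rule 3: drop own request, broadcast the release
        del self.queue[self.pid]
        self.clock += 1
        return ("release", self.clock, self.pid)

    def on_release(self, ts, sender):               # Rule 4: drop the sender's queued request
        self.clock = max(self.clock, ts) + 1
        self.last_seen[sender] = max(self.last_seen[sender], ts)
        self.queue.pop(sender, None)

    def may_enter(self):                            # Rule 5: own request is first in the => order,
        if self.pid not in self.queue:              # and every peer has been heard from later than it
            return False
        mine = (self.queue[self.pid], self.pid)
        first = all((ts, p) > mine for p, ts in self.queue.items() if p != self.pid)
        heard = all(ts > mine[0] for ts in self.last_seen.values())
        return first and heard
```

Rule 5 uses the total order ⇒ from slide 14: ties between equal timestamps are broken by process id.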

  17. Solution - Distributed Algorithm • NOTE: • Each process independently follows these rules • There is no central synchronizing process and no central storage • The algorithm can be viewed as a state machine C × S → S • C : the set of commands (e.g., a resource request) • S : the set of machine states

  18. Anomalous Behavior

  19. Anomalous Behavior (contd.) • Two possible ways to avoid such anomalous behavior: • Give the user the responsibility for avoiding it: when the call is made, b is explicitly given a later timestamp than a • Let S be the set of all system events and Š the set containing S plus the relevant external events, and let → denote the ‘happened before’ relation for Š. Strong Clock Condition: for any events a, b in Š: if a → b then C(a) < C(b) • One can construct physical clocks, running quite independently, that satisfy the Strong Clock Condition, thereby eliminating anomalous behavior

  20. Physical Clocks • PC1: |dCi(t)/dt − 1| < κ, where κ << 1 • PC2: for all i, j: |Ci(t) − Cj(t)| < ε • Two clocks do not run at exactly the same rate: • They may drift further apart • We need an algorithm that ensures PC2 always holds

  21. Physical Clocks • Let μ be a number such that if a → b, a occurs at physical time t in Pi, and b occurs in Pj, then b occurs later than t + μ • μ is less than the shortest message transmission time • To avoid anomalous behavior we need Ci(t + μ) − Cj(t) > 0, which is guaranteed if ε/(1 − κ) ≤ μ
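The step from PC1/PC2 to this bound can be filled in with a short derivation (a sketch in the slide's notation):

```latex
% PC1 gives dC_i/dt > 1 - \kappa, so over an interval of length \mu:
%   C_i(t + \mu) - C_i(t) > (1 - \kappa)\mu .
% PC2 gives C_i(t) > C_j(t) - \epsilon. Adding the two inequalities:
\[
  C_i(t + \mu) - C_j(t) \;>\; (1 - \kappa)\mu - \epsilon \;\ge\; 0
  \qquad \text{whenever} \qquad \frac{\epsilon}{1 - \kappa} \le \mu .
\]
```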

  22. Algorithm • μm is the minimum delay of a message and is known by the processes • IR1’: for each i, Ci is differentiable at t and dCi(t)/dt > 0 • IR2’: (a) If Pi sends a message m at physical time t, the message carries the timestamp Tm = Ci(t) (b) Upon receipt of m at time t’, Pj sets Cj(t’) to max(Cj(t’), Tm + μm)
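A minimal sketch of IR1’/IR2’ (the monotonic time source, the offset bookkeeping, and the value of MU_M are assumptions for illustration, not part of the paper's statement):

```python
import time

MU_M = 0.001   # assumed minimum message transit delay, in seconds

class PhysicalClock:
    def __init__(self):
        self.offset = 0.0                        # correction added to the local clock

    def now(self):
        return time.monotonic() + self.offset    # IR1': continuous and increasing

    def on_send(self):
        return self.now()                        # IR2'(a): Tm = Ci(t)

    def on_receive(self, tm):
        # IR2'(b): Cj(t') := max(Cj(t'), Tm + mu_m), by pushing the offset forward
        deficit = (tm + MU_M) - self.now()
        if deficit > 0:
            self.offset += deficit
        return self.now()
```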

  23. Some related concepts • Lamport clocks limitation: • If a → b then C(a) < C(b), but the reverse is not true! • Nothing can be concluded about the ordering of events by comparing timestamps alone: if C(A) < C(B), then what? • We need to maintain causality • Causal delivery: if send(m) → send(n), then deliver(m) → deliver(n) • We need a timestamping mechanism such that: if T(A) < T(B) then A causally preceded B

  24. Vector Clocks (Figure: space-time diagram of processes P1, P2, P3; P1 events e11–e13 with timestamps (1,0,0), (2,0,0), (3,5,2); P2 events e21–e25 with (0,1,0), (2,2,0), (2,3,1), (2,4,2), (2,5,2); P3 events e31–e32 with (0,0,1), (0,0,2).) • Each process i maintains a vector Vi • Vi[i] : the number of events that have occurred at i • Vi[j] : the number of events at process j that i knows have occurred • Less than or equal: ts(a) ≤ ts(b) if ts(a)[i] ≤ ts(b)[i] for all i, e.g. (3,3,5) ≤ (3,4,5) • ts(e11) = (1, 0, 0) and ts(e22) = (2, 2, 0), which shows e11 → e22 • Causal delivery: if m is sent by P1 and ts(m) is (3, 4, 0) and you are P3, you should already have received exactly 2 messages from P1 and at least 4 from P2
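A minimal vector-clock sketch for three processes (the update rule shown, componentwise max followed by incrementing the local entry, is one common formulation; indices 0..2 stand in for P1..P3, and the example events are illustrative rather than the slide's exact diagram):

```python
N = 3   # number of processes

def local_event(v, i):
    w = list(v); w[i] += 1          # count the new event at process i
    return tuple(w)

def on_receive(v, msg_ts, i):
    w = [max(a, b) for a, b in zip(v, msg_ts)]   # merge: componentwise max
    w[i] += 1                                    # then count the receive event itself
    return tuple(w)

def leq(a, b):
    return all(x <= y for x, y in zip(a, b))     # ts(a) <= ts(b) componentwise

e11 = local_event((0, 0, 0), 0)       # P1: (1, 0, 0)
e21 = local_event((0, 0, 0), 1)       # P2: (0, 1, 0)
e22 = on_receive(e21, e11, 1)         # P2 receives P1's timestamp: (1, 2, 0)
print(leq(e11, e22))                  # True:  e11 causally precedes e22
print(leq(e11, e21) or leq(e21, e11)) # False: e11 and e21 are concurrent
```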

  25. Distributed Failure Detection Algorithm

  26. Conclusion • ‘Happened before’ gives only a partial ordering of events • It can be extended to a total order by breaking ties arbitrarily • But this can lead to anomalous behavior • Synchronized physical clocks can be used to resolve it

  27. QUESTIONS ?
