

  1. Midterm Review CS 230 – Distributed Systems (http://www.ics.uci.edu/~cs230) Nalini Venkatasubramanian nalini@ics.uci.edu

  2. Characterizing Distributed Systems • Multiple Autonomous Computers • each consisting of CPUs, local memory, stable storage, and I/O paths connecting to the environment • Geographically Distributed • Interconnections • some I/O paths interconnect computers that talk to each other • Shared State • No shared memory • systems cooperate to maintain shared state • maintaining global invariants requires correct and coordinated operation of multiple computers.

  3. Classifying Distributed Systems • Based on degree of synchrony • Synchronous • Asynchronous • Based on communication medium • Message Passing • Shared Memory • Fault model • Crash failures • Byzantine failures

  4. Computation in distributed systems • Asynchronous system • no assumptions about process execution speeds and message delivery delays • Synchronous system • make assumptions about relative speeds of processes and delays associated with communication channels • constrains implementation of processes and communication • Models of concurrency • Communicating processes • Functions, Logical clauses • Passive Objects • Active objects, Agents

  5. Communication in Distributed Systems • Provide support for entities to communicate among themselves • Centralized (traditional) OS’s - local communication support • Distributed systems - communication across machine boundaries (WAN, LAN). • 2 paradigms • Message Passing • Processes communicate by sharing messages • Distributed Shared Memory (DSM) • Communication through a virtual shared memory.

  6. Fault Models in Distributed Systems • Crash failures • A processor experiences a crash failure when it ceases to operate at some point without any warning. Failure may not be detectable by other processors. • Failstop - processor fails by halting; detectable by other processors. • Byzantine failures • completely unconstrained failures • conservative, worst-case assumption for behavior of hardware and software • covers the possibility of intelligent (human) intrusion.

  7. Client/Server Computing • Client/server computing allocates application processing between the client and server processes. • A typical application has three basic components: • Presentation logic • Application logic • Data management logic

  8. Distributed Systems Middleware • Middleware is the software between the application programs and the operating system and base networking • Integration fabric that knits together applications, devices, systems software, and data • Middleware provides a comprehensive set of higher-level distributed computing capabilities and a set of interfaces to access the capabilities of the system.

  9. Virtual Time and Global States in Distributed Systems Prof. Nalini Venkatasubramanian Distributed Systems Middleware - Lecture 2 Includes slides modified from: A. Kshemkalyani and M. Singhal (book slides: Distributed Computing: Principles, Algorithms, and Systems)

  10. Global Time & Global State of Distributed Systems • Asynchronous distributed systems consist of several processes without common memory which communicate (solely) via messages with unpredictable transmission delays • Global time & global state are hard to realize in distributed systems • Processes are distributed geographically • Rate of event occurrence can be high (unpredictable) • Event execution times can be small • We can only approximate the global view • Simulate a synchronous distributed system on a given asynchronous system • Simulate a global time – Logical Clocks • Simulate a global state – Global Snapshots

  11. Simulating global time • An accurate notion of global time is difficult to achieve in distributed systems. • We often derive “causality” from loosely synchronized clocks • Clocks in a distributed system drift • Relative to each other • Relative to a real world clock • Determination of this real world clock itself may be an issue • Clock Skew versus Drift • Clock Skew = relative difference in clock values of two processes • Clock Drift = relative difference in clock frequencies (rates) of two processes • Clock synchronization is needed to simulate global time • Correctness – consistency, fairness • Physical clocks vs. logical clocks • Physical clocks - must not deviate from real time by more than a certain amount.

  12. Physical Clocks • How do we measure real time? • 17th century - mechanical clocks based on astronomical measurements • Problem (1940) - rotation of the earth varies (gets slower) • Mean solar second - average over many days • 1948 • counting transitions of the Cesium 133 atom used as an atomic clock • TAI - International Atomic Time • 9,192,631,770 transitions = 1 mean solar second • UTC (Universal Coordinated Time) • From time to time, a leap second is inserted to stay in phase with the sun (30+ times since 1958) • UTC is broadcast by several sources (satellites…)

  13. Cristian’s (Time Server) Algorithm • Uses a time server to synchronize clocks • Time server keeps the reference time (say UTC) • A client asks the time server for time, the server responds with its current time, and the client uses the received value T to set its clock • But network round-trip time introduces errors… • Let RTT = response-received-time – request-sent-time (measurable at the client) • If we know (a) min, the minimum client-server one-way transmission time, and (b) that the server timestamped the message at the last possible instant before sending it back • Then the actual time lies in [T + min, T + RTT − min]; the client’s best single estimate is T + RTT/2
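
To make the error bound concrete, here is a minimal Python sketch of one Cristian round; `request_server_time` is a hypothetical helper that queries the server, and `min_delay` is the assumed minimum one-way transmission time:

```python
import time

def cristian_sync(request_server_time, min_delay=0.0):
    """One round of Cristian's algorithm (sketch)."""
    t_sent = time.monotonic()          # request-sent-time
    T = request_server_time()          # server timestamps as late as possible
    t_recv = time.monotonic()          # response-received-time
    rtt = t_recv - t_sent
    estimate = T + rtt / 2             # midpoint of [T+min, T+RTT-min]
    error_bound = rtt / 2 - min_delay  # half-width of that interval
    return estimate, error_bound
```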

  14. Berkeley UNIX algorithm • One time daemon, no UTC receiver required • Periodically, this daemon polls all the machines for their time • The machines respond. • The daemon computes an average time and then broadcasts the adjustment each machine should apply.
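
A sketch of one polling round, assuming the daemon has already collected delay-compensated clock readings; returning per-machine offsets rather than an absolute time is the usual refinement, since an absolute time would be corrupted by the return trip:

```python
def berkeley_round(times):
    """One round of the Berkeley algorithm (sketch).

    times maps machine name -> clock value collected by the daemon.
    Returns the adjustment each machine should apply to its clock."""
    avg = sum(times.values()) / len(times)
    return {name: avg - t for name, t in times.items()}

# Example: berkeley_round({"master": 10.0, "a": 12.0, "b": 8.0})
# yields {"master": 0.0, "a": -2.0, "b": 2.0}.
```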

  15. Decentralized Averaging Algorithm • Each machine has a daemon without UTC • Periodically, at fixed agreed-upon times, each machine broadcasts its local time. • Each of them calculates the average time by averaging all the received local times.

  16. Clock Synchronization in DCE • DCE models time as an interval rather than a point • Comparing 2 times may therefore yield 3 answers • t1 < t2 • t2 < t1 • not determined (the intervals overlap) • Each machine is either a time server or a clerk • Periodically a clerk contacts all the time servers on its LAN • Based on their answers, it computes a new time and gradually converges to it.
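
A small sketch of interval comparison under this model; times are represented as hypothetical `(lo, hi)` pairs:

```python
def compare_dce(t1, t2):
    """Compare two DCE-style time intervals (sketch).

    Returns '<', '>', or None when the intervals overlap
    and the order is not determined."""
    if t1[1] < t2[0]:
        return "<"
    if t2[1] < t1[0]:
        return ">"
    return None
```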

  17. Network Time Protocol (NTP) • Most widely used physical clock synchronization protocol on the Internet • 10-20 million NTP servers and clients in the Internet • Claimed accuracy (varies) • milliseconds on WANs, submilliseconds on LANs • Hierarchical tree of time servers • The primary server at the root synchronizes with UTC • Secondary servers - backup to the primary server • Lowest level - synchronization subnet with clients.

  18. Logical Time

  19. Causal Relations • Distributed application results in a set of distributed events • Induces a partial order: the causal precedence relation • Knowledge of this causal precedence relation is useful in reasoning about and analyzing the properties of distributed computations • Liveness and fairness in mutual exclusion • Consistency in replicated databases • Distributed debugging, checkpointing

  20. Event Ordering • Lamport defined the “happens before” (<) relation • If a and b are events in the same process, and a occurs before b, then a < b. • If a is the event of a message being sent by one process and b is the event of the message being received by another process, then a < b. • If X < Y and Y < Z then X < Z. • If a < b then time(a) < time(b)

  21. Causal Ordering • “Happens Before” also called causal ordering • Possible to draw a causality relation between 2 events if • They happen in the same process • There is a chain of messages between them • “Happens Before” notion is not straightforward in distributed systems • No guarantees of synchronized clocks • Communication latency

  22. Implementing Logical Clocks • Requires • Data structures local to every process to represent logical time and • a protocol to update the data structures to ensure the consistency condition. • Each process Pi maintains data structures that allow it the following two capabilities: • A local logical clock, denoted by LCi , that helps process Pi measure its own progress. • A logical global clock, denoted by GCi , that is a representation of process Pi ’s local view of the logical global time. Typically, LCi is a part of GCi • The protocol ensures that a process’s logical clock, and thus its view of the global time, is managed consistently. • The protocol consists of the following two rules: • R1: This rule governs how the local logical clock is updated by a process when it executes an event. • R2: This rule governs how a process updates its global logical clock to update its view of the global time and global progress.

  23. Types of Logical Clocks • Systems of logical clocks differ in their representation of logical time and also in the protocol to update the logical clocks. • 3 kinds of logical clocks • Scalar • Vector • Matrix

  24. Scalar Logical Clocks - Lamport • Proposed by Lamport in 1978 as an attempt to totally order events in a distributed system. • Time domain is the set of non-negative integers. • The logical local clock of a process Pi and its local view of the global time are squashed into one integer variable Ci . • Monotonically increasing counter • No relation with real clock • Each process keeps its own logical clock used to timestamp events

  25. Consistency with Scalar Clocks • Local clocks must obey a simple protocol: • When executing an internal event or a send event at process Pi the clock Ci ticks • Ci += d (d>0) • When Pi sends a message m, it piggybacks a logical timestamp t which equals the time of the send event • When executing a receive event at Pi where a message with timestamp t is received, the clock is advanced • Ci = max(Ci,t)+d (d>0) • Results in a partial ordering of events.
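
A minimal sketch of rules R1 and R2 with `d = 1` (the common choice):

```python
class LamportClock:
    """Scalar logical clock (sketch, with increment d = 1)."""

    def __init__(self):
        self.c = 0

    def tick(self):
        """R1: executed for every internal or send event."""
        self.c += 1
        return self.c          # timestamp to piggyback on a send

    def receive(self, t):
        """R2: executed when a message with timestamp t arrives."""
        self.c = max(self.c, t) + 1
        return self.c
```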

  26. Total Ordering • Extending partial order to total order • Global timestamps: • (Ta, Pa) where Ta is the local timestamp and Pa is the process id. • (Ta, Pa) < (Tb, Pb) iff • (Ta < Tb) or ( (Ta = Tb) and (Pa < Pb) ) • Total order is consistent with partial order.
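
Since Python tuples compare lexicographically, the (timestamp, process id) rule needs no extra code; a tiny illustration:

```python
# Global timestamps as (local time, process id) pairs; lexicographic
# tuple comparison is exactly the tie-breaking rule above.
a = (5, 1)          # event at time 5 on process 1
b = (5, 2)          # concurrent event at time 5 on process 2
c = (4, 9)          # event at time 4 on process 9
assert c < a < b    # lower time first; equal times broken by process id
```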

  27. Vector Times • The system of vector clocks was developed independently by Fidge, Mattern and Schmuck. • In the system of vector clocks, the time domain is represented by a set of n-dimensional non-negative integer vectors. • Each process has a clock Ci consisting of a vector of length n, where n is the total number of processes: vt[1..n], where vt[j] is the local logical clock of Pj and describes the logical time progress at process Pj. • A process Pi ticks by incrementing its own component of its clock • Ci[i] += 1 • The timestamp C(e) of an event e is the clock value after ticking • Each message gets a piggybacked timestamp consisting of the vector of the local clock • The process gets some knowledge about the other processes’ time approximation • Ci = sup(Ci, t), where sup(u, v) = w such that w[i] = max(u[i], v[i]) for all i
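
A minimal sketch of these tick and sup rules:

```python
class VectorClock:
    """Vector clock for process i in a system of n processes (sketch)."""

    def __init__(self, i, n):
        self.i = i
        self.v = [0] * n

    def tick(self):
        """Internal or send event: increment own component."""
        self.v[self.i] += 1
        return list(self.v)               # timestamp to piggyback

    def receive(self, t):
        """Receive event: Ci = sup(Ci, t), then tick."""
        self.v = [max(a, b) for a, b in zip(self.v, t)]
        self.v[self.i] += 1
        return list(self.v)
```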

  28. Vector Clocks example Figure 3.2: Evolution of vector time. From A. Kshemkalyani and M. Singhal (Distributed Computing)

  29. Matrix Time • Vector time contains information about latest direct dependencies • What does Pi know about Pk • Also contains info about latest direct dependencies of those dependencies • What does Pi know about what Pk knows about Pj • Message and computation overheads are high • Powerful and useful for applications like distributed garbage collection

  30. Simulate A Global State • Recording the global state of a distributed system on-the-fly is an important paradigm. • Challenge: lack of globally shared memory, global clock and unpredictable message delays in a distributed system • Notions of global time and global state are closely related • A process can (without freezing the whole computation) compute the best possible approximation of the global state • A global state that could have occurred • No process in the system can decide whether the state did really occur • Guarantee stable properties (i.e. once they become true, they remain true)

  31. Consistent Cuts • A cut (or time slice) is a zigzag line cutting a time diagram into 2 parts (past and future) • E is augmented with a cut event ci for each process Pi: E′ = E ∪ {c1, …, cn} • A cut C of an event set E is a finite subset C ⊆ E such that e ∈ C ∧ e′ <i e ⇒ e′ ∈ C (where <i is the local order at each process) • A cut C1 is later than C2 if C1 ⊇ C2 • A consistent cut C of an event set E is a finite subset C ⊆ E such that e ∈ C ∧ e′ < e ⇒ e′ ∈ C • i.e. a cut is consistent if every message received was previously sent (but not necessarily vice versa!)
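
The last bullet translates directly into a check; a sketch where `messages` is a hypothetical list of (send event, receive event) pairs:

```python
def is_consistent(cut, messages):
    """Check cut consistency (sketch).

    cut is the set of events in C. Consistent means: every message
    whose receive event is in the cut also has its send event in it."""
    return all(send in cut for send, recv in messages if recv in cut)
```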

  32. Cuts (Summary) • [Time diagram of three processes P1-P3, each with an instant of local observation, omitted.] • An ideal (vertical) cut (observed sum 15) • A consistent cut (also 15): equivalent to a vertical cut by a “rubber band transformation” • An inconsistent cut (19): not attainable, since it can’t be made vertical (it cuts a message from the future) • The rubber band transformation changes the metric, but keeps the topology

  33. System Model for Global Snapshots • The system consists of a collection of n processes p1, p2, ..., pn that are connected by channels. • There is no globally shared memory and no physical global clock; processes communicate by passing messages through communication channels. • Cij denotes the channel from process pi to process pj and its state is denoted by SCij. • The actions performed by a process are modeled as three types of events: • internal events, the message send event and the message receive event. • For a message mij that is sent by process pi to process pj, let send(mij) and rec(mij) denote its send and receive events.

  34. Process States and Messages in transit • At any instant, the state of process pi, denoted by LSi, is a result of the sequence of all the events executed by pi till that instant. • For an event e and a process state LSi, e ∈ LSi iff e belongs to the sequence of events that have taken process pi to state LSi; otherwise e ∉ LSi. • For a channel Cij, the following set of messages can be defined based on the local states of the processes pi and pj • Transit: transit(LSi, LSj) = {mij | send(mij) ∈ LSi ∧ rec(mij) ∉ LSj}
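
The transit definition as a sketch; `msgs` is a hypothetical list of (send event, receive event, message) triples:

```python
def transit(LS_i, LS_j, msgs):
    """Messages from p_i to p_j still in transit (sketch).

    LS_i and LS_j are sets of events in the two local states."""
    return [m for send, recv, m in msgs
            if send in LS_i and recv not in LS_j]
```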

  35. Global States of Consistent Cuts • The global state of a distributed system is a collection of the local states of the processes and the channels. • A global state computed along a consistent cut is correct • The global state of a consistent cut comprises the local state of each process at the time the cut event happens and the set of all messages sent but not yet received • The snapshot problem consists in designing an efficient protocol which yields only consistent cuts and to collect the local state information • Messages crossing the cut must be captured • Chandy & Lamport presented an algorithm assuming that message transmission is FIFO

  36. Chandy-Lamport Distributed Snapshot Algorithm • Assumes FIFO communication in channels • Uses a control message, called a marker, to separate messages in the channels. • After a site has recorded its snapshot, it sends a marker along all of its outgoing channels before sending out any more messages. • The marker separates the messages in the channel into those to be included in the snapshot and those not to be recorded in the snapshot. • A process must record its snapshot no later than when it receives a marker on any of its incoming channels. • The algorithm terminates after each process has received a marker on all of its incoming channels. • All the local snapshots get disseminated to all other processes and all the processes can determine the global state.

  37. Chandy-Lamport Distributed Snapshot Algorithm
Marker receiving rule for process Pi, on receiving a marker along channel c:
  if (Pi has not yet recorded its state) it
    records its process state now;
    records the state of c as the empty set;
    turns on recording of messages arriving over other incoming channels;
  else
    Pi records the state of c as the set of messages received over c since it saved its state.
Marker sending rule for process Pi:
  After Pi has recorded its state, for each outgoing channel c:
    Pi sends one marker message over c (before it sends any other message over c).
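
A sketch of both rules in Python; `save_local_state` and `send_marker` are hypothetical hooks, and the initiator starts the algorithm by calling `record_state` directly:

```python
class SnapshotProcess:
    """Chandy-Lamport marker rules (sketch; FIFO channels assumed)."""

    def __init__(self, incoming, outgoing, save_local_state, send_marker):
        self.incoming, self.outgoing = incoming, outgoing
        self.save_local_state = save_local_state   # hypothetical hooks
        self.send_marker = send_marker
        self.recorded = False
        self.chan_state = {}   # incoming channel -> in-transit messages
        self.open = set()      # incoming channels still being recorded

    def record_state(self):
        """Record own state, then send markers (marker sending rule)."""
        self.recorded = True
        self.local_state = self.save_local_state()
        for c in self.outgoing:
            self.send_marker(c)            # before any other message on c
        self.open = set(self.incoming)
        self.chan_state = {c: [] for c in self.incoming}

    def on_message(self, channel, msg):
        # A message counts as channel state only while that channel's
        # recording is still open (its marker has not arrived yet).
        if self.recorded and channel in self.open:
            self.chan_state[channel].append(msg)

    def on_marker(self, channel):
        """Marker receiving rule."""
        if not self.recorded:
            self.record_state()      # this channel's state stays empty
        self.open.discard(channel)   # recording of this channel is closed
        # The local snapshot is complete once self.open is empty.
```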

  38. Chandy-Lamport Extensions: Spezialetti-Kerns and others • Exploit concurrently initiated snapshots to reduce overhead of local snapshot exchange • Snapshot Recording • Markers carry identifier of initiator – first initiator recorded in a per process “master” variable. • Region - all the processes whose master field has same initiator. • Identifiers of concurrent initiators recorded in “id-border-set.” • Snapshot Dissemination • Forest of spanning trees is implicitly created in the system. Every Initiator is root of a spanning tree; nodes relay snapshots of rooted subtree to parent in spanning tree • Each initiator assembles snapshot for processes in its region and exchanges with initiators in adjacent regions. • Others: multiple repeated snapshots; wave algorithm

  39. Computing Global States without FIFO Assumption • In a non-FIFO system, a marker cannot be used to delineate messages into those to be recorded in the global state and those not to be recorded. • In a non-FIFO system, either some degree of inhibition or piggybacking of control information on computation messages is used to capture out-of-sequence messages.

  40. Non-FIFO Channel Assumption: Lai-Yang Algorithm • Emulates a marker by using a coloring scheme • Every process: white (before snapshot); red (after snapshot). • Every message sent by a white (red) process is colored white (red), indicating whether it was sent before (after) the snapshot. • Each process (which is initially white) becomes red as soon as it receives a red message for the first time and starts a virtual broadcast algorithm to ensure that all processes will eventually become red • Sends dummy red messages to all processes (floods neighbors) • Determining messages in transit • A white process records the history of white messages sent/received on each channel. • When a process turns red, it sends these histories along with its snapshot to the initiator process that collects the global snapshot. • The initiator process evaluates transit(LSi, LSj) to compute the state of a channel Cij: • SCij = {white messages sent by pi on Cij} − {white messages received by pj on Cij} = {send(mij) | send(mij) ∈ LSi} − {rec(mij) | rec(mij) ∈ LSj}.
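
A sketch of the coloring rules; `take_snapshot` is a hypothetical hook, and the per-channel histories of the full algorithm are collapsed into single counters to keep the sketch short:

```python
class LaiYangProcess:
    """Lai-Yang coloring (sketch; no FIFO assumption needed)."""

    def __init__(self, take_snapshot):
        self.color = "white"
        self.take_snapshot = take_snapshot
        self.white_sent = 0    # per channel in the full algorithm
        self.white_recv = 0

    def send(self, payload):
        if self.color == "white":
            self.white_sent += 1
        return (self.color, payload)    # every message carries a color

    def on_receive(self, msg):
        color, payload = msg
        if color == "red" and self.color == "white":
            self.turn_red()             # snapshot before processing
        if color == "white":
            self.white_recv += 1        # candidate in-transit message
        return payload

    def turn_red(self):
        if self.color == "white":
            self.color = "red"
            self.take_snapshot()        # then report histories/snapshot
                                        # to the initiator
```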

  41. Non-FIFO Channel Assumption: Termination Detection • Required to detect that no white messages are in transit. • Method 1: Deficiency Counting • Each process Pi keeps a counter cntri that indicates the difference between the number of white messages it has sent and received before recording its snapshot. • It reports this value to the initiator process along with its snapshot and forwards all white messages it receives henceforth to the initiator. • Snapshot collection terminates when the initiator has received Σi cntri forwarded white messages. • Method 2 • Each red message sent by a process carries a piggybacked value of the number of white messages sent on that channel before the local state recording. • Each process keeps a counter for the number of white messages received on each channel. • A process can detect termination of recording the states of incoming channels when it receives as many white messages on each channel as the value piggybacked on the red messages received on that channel.

  42. Non-FIFO Channel Assumption: Mattern Algorithm • Uses vector clocks and assumes a single initiator • All processes agree on some future virtual time s, or a set of virtual time instants s1,…,sn which are mutually concurrent and have not yet occurred • A process takes its local snapshot at virtual time s • After time s the local snapshots are collected to construct a global snapshot • Pi ticks and then fixes its next time s = Ci + (0,…,0,1,0,…,0) to be the common snapshot time • Pi broadcasts s • Pi blocks waiting for all the acknowledgements • Pi ticks again (setting Ci = s), takes its snapshot and broadcasts a dummy message (i.e. forces everybody else to advance their clocks to a value ≥ s) • Each process takes its snapshot and sends it to Pi when its local clock becomes ≥ s

  43. Non-FIFO Channel Assumption: Mattern Algorithm • Optimization: invent an (n+1)-th virtual process whose clock is managed by Pi • Pi can use its own clock, and because the virtual clock Cn+1 ticks only when Pi initiates a new run of the snapshot: • the first n components of the vector can be omitted • the first broadcast phase is unnecessary • the counter can be kept modulo 2

  44. Distributed Operating Systems - Introduction Prof. Nalini Venkatasubramanian (includes slides from Prof. Petru Eles and textbook slides by Kshemkalyani/Singhal)

  45. What does an OS do? • Process/Thread Management • Scheduling • Communication • Synchronization • Memory Management • Storage Management • FileSystems Management • Protection and Security • Networking

  46. Operating System Types • Multiprocessor OS • Looks like a virtual uniprocessor, contains only one copy of the OS, communicates via shared memory, single run queue • Network OS • Does not look like a virtual uniprocessor, contains n copies of the OS, communicates via shared files, n run queues • Distributed OS • Looks like a virtual uniprocessor (more or less), contains n copies of the OS, communicates via messages, n run queues

  47. Design Elements • Communication • Two basic IPC paradigms used in DOS • Message Passing (RPC) and Shared Memory • synchronous, asynchronous • Process Management • Process synchronization • Coordination of distributed processes is inevitable • mutual exclusion, deadlocks, leader election • Task Partitioning, allocation, load balancing, migration • FileSystems • Naming of files/directories • File sharing semantics • Caching/update/replication

  48. Remote Procedure Call A convenient way to construct a client-server connection without explicitly writing send/receive type programs (helps maintain transparency).

  49. Remote Procedure Call (cont.) • Client procedure calls the client stub in the normal way • Client stub builds a message and traps to the kernel • Kernel sends the message to the remote kernel • Remote kernel gives the message to the server stub • Server stub unpacks the parameters and calls the server • Server computes the results and returns them to the server stub • Server stub packs the results in a message and traps to the kernel • Remote kernel sends the message to the client kernel • Client kernel gives the message to the client stub • Client stub unpacks the results and returns to the client
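
A minimal client-stub sketch showing the packing/unpacking steps; the pickle-over-TCP wire format is purely illustrative (and unsafe for untrusted peers):

```python
import pickle
import socket

def client_stub(host, port, proc, *args):
    """Minimal RPC client stub (sketch; hypothetical wire format).

    Packs the procedure name and arguments (steps 1-2), ships them
    to the server (step 3), then unpacks the result (steps 9-10)."""
    with socket.create_connection((host, port)) as s:
        s.sendall(pickle.dumps((proc, args)))  # marshal the call
        s.shutdown(socket.SHUT_WR)             # signal end of request
        data = b"".join(iter(lambda: s.recv(4096), b""))
    return pickle.loads(data)                  # unmarshal the result
```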

  50. Distributed Shared Memory • Provides a shared-memory abstraction on loosely coupled distributed-memory processors. • Issues • Granularity of the block size • Synchronization • Memory Coherence (Consistency models) • Data Location and Access • Replacement Strategies • Thrashing • Heterogeneity
