
Distributed Shared Memory and Sequential Consistency


Presentation Transcript


  1. Distributed Shared Memory and Sequential Consistency

  2. Outline • Consistency Models • Memory Consistency Models • Distributed Shared Memory • Implementing Sequential Consistency in Distributed Shared Memory

  3. Consistency • There are many aspects to consistency, but remember that consistency is how people reason about a system: what behavior should be considered "correct" or "suitable". • A consistency model is the set of constraints on a system's behavior that can be observed from outside the system. • Consistency problems arise in many distributed-system settings, including DSM (distributed shared memory), shared-memory multiprocessors (where the model is called a memory model), and replicas stored on multiple servers.

  4. Examples of consistency • Memory: • step 1: write x=5; step 2: read x; • Step 2's read of x should return 5, because the read follows the write and must reveal the write's effect. This is single-object consistency, also called "coherence". • Database: bank transaction • (transfer 1000 from acctA to acctB) • ACT1: acctA = acctA - 1000; ACT2: acctB = acctB + 1000; • acctA + acctB should stay the same, and no intermediate state should be visible from outside. (A small sketch follows.) • Replicas in a distributed system • All replicas of the same data should stay identical despite network or server failures.
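
A minimal Go sketch of the bank-transaction example, assuming a single process; the names transfer, acctA, and acctB are illustrative. The mutex hides the intermediate state in which the money has left acctA but not yet reached acctB, so outside observers always see acctA + acctB unchanged.

    package main

    import (
        "fmt"
        "sync"
    )

    // Illustrative names: acctA, acctB, transfer. The mutex ensures no reader
    // can observe the intermediate state in which the money has left acctA but
    // not yet arrived in acctB, so the invariant acctA+acctB holds from outside.
    var (
        mu    sync.Mutex
        acctA = 5000
        acctB = 5000
    )

    func transfer(amount int) {
        mu.Lock()
        defer mu.Unlock()
        acctA -= amount // invariant temporarily broken here...
        acctB += amount // ...but restored before the lock is released
    }

    func total() int {
        mu.Lock()
        defer mu.Unlock()
        return acctA + acctB
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                transfer(1000)
            }()
        }
        wg.Wait()
        fmt.Println("total:", total()) // always 10000
    }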

  5. Consistency Challenges • There are no right or wrong consistency models; choosing one is usually the art of trading off ease of programming against efficiency. • There is no consistency problem when a single thread reads and writes the data, since a read always reveals the result of the most recent write. • Thus consistency problems arise when dealing with concurrent accesses, whether to a single object or to multiple objects. • Note that this can be less obvious than you might think. • We will focus on building a distributed shared memory system.

  6. Many systems involve consistency • Many systems have storage or memory with concurrent readers and writers, and all of them face consistency problems: • multiprocessors, databases, AFS, the lab extent server, lab YFS. • You often want to optimize in ways that risk changing behavior: • add caching • split data over multiple servers • replicate for fault tolerance • How can we decide whether such optimizations are "correct"? • We need a way to think about correct execution of distributed programs. Most of these ideas come from work on multiprocessors (memory models) and databases (transactions) 20-30 years ago. • The following discussion focuses on correctness and efficiency, not fault tolerance.

  7. Distributed Shared Memory • Multiple processes connect to a virtual shared memory. The virtual shared memory may be physically located on distributed hosts connected by a network. • So, how do we implement a distributed shared memory system?

  8. Naive Distributed Shared Memory • Each machine has a local copy of all memory (mem0, mem1, mem2 should be kept identical). • Read: from local memory. • Write: send an update message to every other host (but don't wait). • This is fast because a process never waits for communication. • Does this memory work well? (A toy model of this design is sketched below.)
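
A toy Go model of the naive design, assuming hosts live in one process and a channel stands in for the network; the host and update types are illustrative. Reads are local and writes are fire-and-forget, which is exactly what makes the design fast and what allows the anomalies discussed on the next slides.

    package main

    import "fmt"

    type update struct {
        addr  string
        value int
    }

    type host struct {
        mem   map[string]int // local copy of all of memory
        peers []chan update  // outgoing "network" links to other hosts
    }

    // read is purely local: fast, but may return stale data.
    func (h *host) read(addr string) int { return h.mem[addr] }

    // write updates local memory and sends the update to every peer
    // without waiting for any acknowledgement.
    func (h *host) write(addr string, v int) {
        h.mem[addr] = v
        for _, p := range h.peers {
            p <- update{addr, v} // a real network may delay or reorder these
        }
    }

    func main() {
        net := make(chan update, 16)
        m0 := &host{mem: map[string]int{}, peers: []chan update{net}}
        m1 := &host{mem: map[string]int{}}

        m0.write("v0", 42)
        m0.write("done0", 1)

        // m1 applies updates whenever they arrive; nothing in the protocol
        // guarantees it sees v0 before done0 (problem A on the next slides).
        for i := 0; i < 2; i++ {
            u := <-net
            m1.mem[u.addr] = u.value
        }
        fmt.Println(m1.read("v0"), m1.read("done0"))
    }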

  9. Example 1 • M0: • v0 = f0(); • done0 = 1; • M1: • while (done0 == 0) ; • v1 = f1(v0); • done1 = 1; • M2: • while (done1 == 0) ; • v2 = f2(v0, v1); • Intuitive intent: M2 should execute f2() with the results from M0 and M1; waiting for M1 (done1) implicitly waits for M0 as well. (A runnable sketch of this program appears below.)
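
A runnable Go sketch of Example 1, assuming a single process whose sync/atomic operations behave like a sequentially consistent shared memory; f0, f1, and f2 are placeholder functions. On the naive distributed memory of the previous slide, the same program can misbehave.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var v0, v1, v2, done0, done1 atomic.Int64

    func f0() int64           { return 1 }
    func f1(a int64) int64    { return a + 1 }
    func f2(a, b int64) int64 { return a + b }

    func main() {
        go func() { // M0
            v0.Store(f0())
            done0.Store(1)
        }()
        go func() { // M1
            for done0.Load() == 0 {
            }
            v1.Store(f1(v0.Load()))
            done1.Store(1)
        }()
        // M2
        for done1.Load() == 0 {
        }
        v2.Store(f2(v0.Load(), v1.Load()))
        fmt.Println(v2.Load()) // with sequential consistency, always 3
    }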

  10. Will the naive distributed memory work for example 1? • Problem A • M0's writes of v0 and done0 may be reordered by the network, so M1 can see done0=1 while v0 is still unset. • How to fix this?

  11. Will naive distributed memory work for example 1? • Problem B • M2 sees M1's writes before M0's writes, i.e. M2 and M1 disagree on the order of M0's and M1's writes. • How to fix this?

  12. Naive distributed memory is fast • But it has unexpected behavior. • Maybe it is not "correct"? • Or maybe we should never have expected example 1 to work. • So, how can we write correct distributed programs with shared storage? • The memory system promises to behave according to certain rules, and we write programs assuming those rules. • The rules are a "consistency model": the contract between the memory system and the programmer.

  13. What makes a good consistency model? • There are no "right" or "wrong" consistency models. • A model may be harder to program against but give good efficiency. • A model may be easier to program against but give poor performance. • Some consistency models can produce surprising results. • Different kinds of applications (e.g., Web pages vs. shared memory) may call for different consistency models.

  14. Strict Consistency • How do we define strict consistency? • Suppose we can tag each operation with a timestamp from a global clock. • Suppose each operation completes instantaneously. • Then: • a read returns the most recently written value. • This is what uniprocessors support.

  15. Strict Consistency • This follows strict consistency: • a=1; a=2; print a; always prints the latest value of a (2). • Is this strictly consistent? • P0: W(x)1 • P1: R(x)0 R(x)1 • Strict consistency is a very intuitive consistency model. • So, would strict consistency avoid problems A and B?

  16. Implementation of Strict Consistency Model • How does a read at time 2 learn about a write at time 1 on another machine? • How does a write at time 4 know to pause until a read at time 3 has finished? How long should it wait? • This is too hard to implement.

  17. Sequential Consistency • Sequential consistency (serializability): the results are the same as if operations from different processors were interleaved into one order, with the operations of each single processor appearing in the order specified by its program. • Example of a sequentially consistent execution (not strictly consistent, since it does not respect physical-time ordering): • P1: W(x)1 • P2: R(x)0 R(x)1 • Sequential consistency is still inefficient: we may want to weaken the model further.

  18. What does sequential consistency imply? • Sequential consistency defines a total order of operations: • Within each machine, operations (instructions) appear in the total order in program order, and results are determined by the total order. • All machines see results consistent with the total order (all machines agree on the order of operations applied to the shared memory); every read sees the most recent write in the total order; all machines see the same total order. • Sequential consistency performs better than strict consistency: • the system has some freedom in how to interleave operations from different machines, • and it is not forced to order them by wall-clock time (as strict consistency is); it can delay a read or write while it locates the current value.

  19. Problems A and B under sequential consistency • Problem A • M0's execution order was: write v0, then write done0. • M1 saw done0 set while v0 was still unset. • Each machine's operations must appear in its program order in the total order, so this cannot happen under sequential consistency. • Problem B • M1 saw v0 written, then done0, then wrote done1. • M2 saw done1 set while v0 was still unset. • No single total order allows both views, so this cannot happen under sequential consistency.

  20. The performance bottleneck • Once a machine’s write completes, other machines’ reads must see the new data. • Thus communication cannot be omitted or much delayed. • Thus either reads or writes (or both) will be expensive.

  21. An implementation of sequential consistency • Use a single server. Each machine sends its read/write operations to the server, where they are queued. • (Each machine sends its operations in program order, and they are queued in that order.) • The server picks an order among the waiting operations. • The server executes them one at a time, sending back replies. (A minimal sketch follows.)
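
A minimal Go sketch of the single-server design, assuming clients and the server share one process and a channel stands in for the network; op, server, read, and write are illustrative names. The server applies one operation at a time, which yields the single total order, and each client blocks on its reply, which preserves its program order.

    package main

    import "fmt"

    type op struct {
        isWrite bool
        addr    string
        value   int
        reply   chan int // the server sends the result (read value or ack) here
    }

    // server executes queued operations one at a time, which yields a single
    // total order over all machines' reads and writes.
    func server(ops chan op) {
        mem := map[string]int{}
        for o := range ops {
            if o.isWrite {
                mem[o.addr] = o.value
                o.reply <- 0 // ack
            } else {
                o.reply <- mem[o.addr]
            }
        }
    }

    // write and read block until the server replies, so each client's
    // operations enter the queue in its program order.
    func write(ops chan op, addr string, v int) {
        r := make(chan int)
        ops <- op{isWrite: true, addr: addr, value: v, reply: r}
        <-r
    }

    func read(ops chan op, addr string) int {
        r := make(chan int)
        ops <- op{addr: addr, reply: r}
        return <-r
    }

    func main() {
        ops := make(chan op)
        go server(ops)
        write(ops, "x", 5)
        fmt.Println(read(ops, "x")) // 5: the read follows the write in the total order
    }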

  22. Performance problems of the simple implementation • A single server will soon become overloaded. • There is no local cache: every operation waits for a reply from the server. (This would be a severe performance killer for multicore processors.) • So: • Partition memory across multiple servers to eliminate the single-server bottleneck. • Many machines can then be served in parallel if they don't touch the same memory. • Lamport's 1979 paper shows that a system is sequentially consistent if: • 1. each machine executes one operation at a time, waiting for it to complete, and • 2. operations on each memory location are executed one at a time; • i.e., you can have lots of independent machines and memory systems. (A partitioning sketch follows.)
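
A small Go sketch of partitioning addresses across servers, assuming each shard runs the single-server loop from the previous sketch; shardFor and nShards are illustrative names. Because each address always maps to the same shard, operations on any one location are still executed one at a time (Lamport's second condition), while different locations can proceed in parallel.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    const nShards = 4

    // shardFor deterministically assigns an address to one of nShards servers,
    // so all operations on one location go to the same server and are executed
    // there one at a time, while different locations proceed in parallel.
    func shardFor(addr string) int {
        h := fnv.New32a()
        h.Write([]byte(addr))
        return int(h.Sum32()) % nShards
    }

    func main() {
        for _, a := range []string{"x", "y", "z", "x"} {
            fmt.Printf("%s -> server %d\n", a, shardFor(a))
        }
    }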

  23. Distributed shared memory • If a memory location is not being written, it can be replicated, i.e. cached on each machine so that reads are fast. • But we have to ensure that reads and writes stay ordered: • once a write modifies the location, no read may return the old value. • So cached copies must be revoked before writing. • This delays writes in order to speed up reads.

  24. IVY: Memory Coherence in Shared Virtual Memory Systems (Kai Li and Paul Hudak)

  25. IVY: Distributed Shared Memory • IVY connects multiple desktops/servers over a LAN and provides the illusion of one powerful machine: a single machine with shared memory in which all CPUs are visible to applications. • Applications can use ordinary multi-threaded programming while harnessing the power of many machines. • Applications do not need to perform explicit communication (unlike MPI).

  26. Page operations • IVY operates on pages of memory, stored in the machines' DRAM (there is no memory server, unlike the single-server implementation). • It uses the VM (virtual memory) hardware to intercept reads and writes. • Let's build the IVY system step by step.

  27. Simplified IVY • Only one copy of a page exists at a time (on only one machine). • All other copies are marked invalid in the VM tables. • If M0 faults on PageX (either read or write): • find the one copy, e.g. on M1; • invalidate PageX on M1; • move PageX to M0; • M0 marks the page R/W in its VM tables. • This provides sequential consistency: the order of reads/writes is set by the order in which the page moves. • But it is slow: an application that performs many reads and no writes still causes many faults and page moves. (A sketch of this fault handler follows.)
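
A Go sketch of the simplified-IVY fault handler, assuming all machines live in one process and a slice of machine structs stands in for the cluster; machine, page, and fault are illustrative names.

    package main

    import "fmt"

    type page struct {
        data  []byte
        valid bool // false = marked invalid in this machine's "VM tables"
    }

    type machine struct {
        id    int
        pages map[int]*page
    }

    // fault handles a read or write fault on pageNo at machine m: find the one
    // valid copy, invalidate it there, move it here, and mark it R/W locally.
    func fault(cluster []*machine, m *machine, pageNo int) {
        for _, other := range cluster {
            if other == m {
                continue
            }
            if p, ok := other.pages[pageNo]; ok && p.valid {
                p.valid = false                                    // invalidate at the old holder
                m.pages[pageNo] = &page{data: p.data, valid: true} // move the page here
                return
            }
        }
        // No machine holds the page yet: create a fresh one locally.
        m.pages[pageNo] = &page{data: make([]byte, 4096), valid: true}
    }

    func main() {
        m0 := &machine{id: 0, pages: map[int]*page{}}
        m1 := &machine{id: 1, pages: map[int]*page{}}
        cluster := []*machine{m0, m1}

        fault(cluster, m1, 7) // M1 touches page 7 first and becomes its holder
        fault(cluster, m0, 7) // M0 faults: the page moves from M1 to M0
        fmt.Println("M0 valid:", m0.pages[7].valid, "M1 valid:", m1.pages[7].valid)
    }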

  28. Multiple readers in IVY • IVY allows multiple read-only copies between writes. • There is no need to force an order among reads that occur between two writes. • IVY places a copy of the page at each reader, so reads can proceed concurrently.

  29. IVY core strategy • At any time, either: • there are multiple read-only copies and no writeable copy, or • there is one writeable copy and no other copies. • Before a write, invalidate all other copies. • Must track the single writer (owner) and the set of copies (copy_set).

  30. Why is it crucial to invalidate all copies before a write? • Once a write completes, all subsequent reads *must* see the new data; otherwise different machines could see writes in different orders. • If stale data could be read, this history could occur: • M0: W(v)0 W(v)99 W(done)1 • M1: R(v)0 R(done)1 R(v)0 • But we know that cannot happen under sequential consistency.

  31. IVY Implementation • Manager: the process that tracks the mapping from each page to its owner; it acts like a directory that helps a process find a page. In IVY the manager can be either fixed or dynamic. • Owner: the owner of a page has the write privilege; all other processes have at most read-only copies. • copy_set: records the copies of a given page. If the page is read-only, the copy_set lists the locations of all read-only copies; if the page is writable, only the owner holds it and the copy_set for it is empty. (See the structs sketched below.)
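
A sketch of this state as Go structs, assuming a fixed (centralized) manager; the type and field names are illustrative, not IVY's own. managerInfo corresponds to the manager's per-page info, and ptableEntry to the ptable entry every CPU keeps (see slide 34).

    package main

    type access int

    const (
        accessNil access = iota // no valid local copy
        accessRead
        accessWrite
    )

    // managerInfo is the manager's per-page record ("info" on slide 34).
    type managerInfo struct {
        owner   int          // CPU that may write the page
        copySet map[int]bool // CPUs holding read-only copies (empty when writable)
    }

    // ptableEntry is what every CPU keeps per page ("ptable" on slide 34).
    type ptableEntry struct {
        access access // R, W, or nil
        owner  bool   // true if this CPU is the owner
    }

    func main() {}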

  32. IVY Messages • RQ (read query, reader to MGR) • RF (read forward, MGR to owner) • RD (read data, owner to reader) • RC (read confirm, reader to MGR) • WQ (write query, writer to MGR) • IV (invalidate, MGR to copy_set) • IC (invalidate confirm, copy_set to MGR) • WF (write forward, MGR to owner) • WD (write data, owner to writer) • WC (write confirm, writer to MGR)
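
The same ten message types written as Go constants, as one might define them in a sketch implementation; msgType is an illustrative name and the constants mirror the abbreviations above.

    package main

    type msgType int

    const (
        RQ msgType = iota // read query, reader -> MGR
        RF                // read forward, MGR -> owner
        RD                // read data, owner -> reader
        RC                // read confirm, reader -> MGR
        WQ                // write query, writer -> MGR
        IV                // invalidate, MGR -> copy_set
        IC                // invalidate confirm, copy_set -> MGR
        WF                // write forward, MGR -> owner
        WD                // write data, owner -> writer
        WC                // write confirm, writer -> MGR
    )

    func main() {}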

  33. Scenarios • scenario 1: M0 has writeable copy, M1 wants to read • 0. page fault on M1, since page must have been marked invalid • 1. M1 sends RQ to MGR • 2. MGR sends RF to M0, MGR adds M1 to copy_set • 3. M0 marks page as access=read, sends RD to M1 • 4. M1 marks access=read, sends RC to MGR • scenario 2: now M2 wants to write • 0. page fault on M2 • 1. M2 sends WQ to MGR • 2. MGR sends IV to copy_set (i.e. M1) • 3. M1 sends IC msg to MGR • 4. MGR sends WF to M0, sets owner=M2, copy_set={} • 5. M0 sends WD to M2, access=none • 6. M2 marks r/w, sends WC to MGR
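
A Go sketch of the manager-side logic for these two scenarios, assuming a fixed manager; pageInfo, handleRQ, handleWQ, and the send* helpers are illustrative stand-ins (the RD/RC/WD/WC legs handled by the owner and the faulting CPU are only noted in comments).

    package main

    import "fmt"

    type pageInfo struct {
        owner   int
        copySet map[int]bool
    }

    // Stand-ins for the real network messages sent by the manager.
    func sendRF(owner, reader, page int) { fmt.Printf("RF: page %d, CPU%d -> CPU%d\n", page, owner, reader) }
    func sendIV(holder, page int)        { fmt.Printf("IV: invalidate page %d at CPU%d\n", page, holder) }
    func sendWF(owner, writer, page int) { fmt.Printf("WF: page %d, CPU%d -> CPU%d\n", page, owner, writer) }

    // handleRQ: scenario 1 -- a reader faults and sends RQ to the manager.
    func handleRQ(info *pageInfo, reader, page int) {
        info.copySet[reader] = true      // manager adds the reader to copy_set
        sendRF(info.owner, reader, page) // owner then sends RD to the reader,
        // and the reader replies RC to the manager (not shown).
    }

    // handleWQ: scenario 2 -- a writer faults and sends WQ to the manager.
    func handleWQ(info *pageInfo, writer, page int) {
        for holder := range info.copySet {
            sendIV(holder, page) // manager waits for an IC from each holder (not shown)
        }
        sendWF(info.owner, writer, page) // old owner sends WD to the writer
        info.owner = writer
        info.copySet = map[int]bool{} // page is now writable: copy_set is empty
        // The writer replies WC to the manager (not shown).
    }

    func main() {
        info := &pageInfo{owner: 0, copySet: map[int]bool{}}
        handleRQ(info, 1, 7) // scenario 1: M1 reads page 7 owned by M0
        handleWQ(info, 2, 7) // scenario 2: M2 writes page 7
        fmt.Printf("owner=CPU%d copy_set=%v\n", info.owner, info.copySet)
    }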

  34. Per-page state shown in the diagrams that follow: • ptable (kept by all CPUs): access = R, W, or nil; owner = T or F • info (kept by the MGR only): copy_set = list of CPUs with read-only copies; owner = CPU that can write the page

  35.–50. [Diagram slides: a step-by-step animation of the ptable and info state on CPU0, CPU1, and CPU2/MGR, first for a read scenario (messages RQ, RF, RD, RC) and then for a write scenario (messages WQ, IV, IC, WF, WD). The transcript ends partway through the write scenario.]
