
CIS 720


Presentation Transcript


  1. CIS 720 Distributed Shared Memory

  2. Shared Memory • Shared memory programs are easier to write • Multiprocessor systems provide physically shared memory • Message passing systems have no physically shared memory, so we need to provide an abstraction of shared memory: Distributed Shared Memory (DSM)

  3. Shared Memory

  4. Placement options: • Single copy of each variable at a fixed location • Multiple copies

  5. Consistency Models • w(x)v: write value v into x • r(x)v: read of x returns value v • Uniprocess programs: all operations are totally ordered, and each read returns the value written by the most recent write • Example history: w(y)2, w(x)2, r(y)2, w(x)1, r(x); the final r(x) must return 1
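
  The legality condition above is mechanical enough to check in code. Below is a minimal sketch (my own encoding, not from the course) that replays a totally ordered history and verifies every read; operations are tuples ("w"/"r", variable, value), matching the w(x)v / r(x)v notation.

    def is_legal(history):
        memory = {}                      # most recent written value per variable
        for op, var, value in history:
            if op == "w":
                memory[var] = value
            elif memory.get(var) != value:
                return False             # read returned a stale or wrong value
        return True

    # The uniprocess example above: the final read of x must return 1.
    history = [("w", "y", 2), ("w", "x", 2), ("r", "y", 2),
               ("w", "x", 1), ("r", "x", 1)]
    assert is_legal(history)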

  6. Migratory protocol • Each page (variable) has a single copy • Initially, pages are distributed among the processes • To read or write a variable: if the page is locally available, perform the operation; otherwise, the DSM layer sends a request for the page to be moved locally (see the sketch below)
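
  As a rough illustration of the migratory protocol, here is a hypothetical single-process simulation; the class and method names are invented, and a real DSM layer would move the page with network messages and page-fault handlers rather than a dictionary update.

    class MigratoryDSM:
        def __init__(self, owner_of):
            self.owner_of = dict(owner_of)       # page -> process holding its only copy
            self.storage = {p: None for p in owner_of}

        def access(self, proc, page, write=None):
            if self.owner_of[page] != proc:
                # page is remote: the DSM layer would request it over the
                # network; here we just move ownership locally
                self.owner_of[page] = proc
            if write is not None:
                self.storage[page] = write       # local write
            return self.storage[page]            # local read

    dsm = MigratoryDSM({"p0": 0, "p1": 1})       # initial distribution of pages
    dsm.access(proc=1, page="p0", write=42)      # migrates p0 to process 1, writes
    assert dsm.access(proc=0, page="p0") == 42   # migrates it back, reads 42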

  7. The migratory protocol can suffer from thrashing: a page accessed alternately by two processes bounces back and forth between them • Solution: maintain multiple copies

  8. Consistency model • In the presence of multiple copies, a read may return values written by other processes; a consistency model specifies which values a read is allowed to return

  9. Definitions • Program order: the order in which each process issues its operations, e.g. P1: x = 3; w = x and P2: x = 5; y = z; y = 4; z = 3 • Execution history: an interleaving of the processes' operations, e.g. ..., x = 5; x = 3; w = x; y = z; y = 4; z = 3, ... • In this history w must be 3, since the read w = x follows the write x = 3 • Legal execution history: every read returns the value written by the most recent preceding write

  10. Atomic consistency • Any read of a memory location x must return the value stored by the most recent write to x • The order of events must coincide with the real-time order of non-overlapping events

  11. Write-invalidate Protocol • Each page has an owner • Protection modes: read, read_and_write, none • Read operation: if the page is not locally available, obtain a read-only copy; set the protection mode to read • Write operation: contact the current owner; get the page and its ownership; send invalidate messages to nodes that hold copies; set the protection to read_and_write (see the sketch below)
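
  A hypothetical simulation of write-invalidate, with invented names; it tracks ownership, the set of copy holders, and per-process protection modes, and omits all messaging.

    class WriteInvalidateDSM:
        def __init__(self, pages):
            self.owner = {p: 0 for p in pages}       # process 0 starts as owner
            self.copies = {p: {0} for p in pages}    # processes holding page p
            self.value = {p: None for p in pages}
            self.mode = {}                           # (proc, page) -> protection

        def read(self, proc, page):
            if proc not in self.copies[page]:        # miss: obtain read-only copy
                self.copies[page].add(proc)
                self.mode[(proc, page)] = "read"
            return self.value[page]

        def write(self, proc, page, v):
            self.owner[page] = proc                  # take page and its ownership
            for other in self.copies[page] - {proc}: # invalidate other copies
                self.mode[(other, page)] = "none"
            self.copies[page] = {proc}
            self.mode[(proc, page)] = "read_and_write"
            self.value[page] = v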

  12. Write-through Protocol • Multiprocessor with snooping caches • Read operation: if the variable is not in the cache, read it from main memory and cache it; otherwise, read it from the cache • Write operation: update shared memory and invalidate the cache entries of other processors
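
  A small sketch of write-through with snooping, again with invented names: the "bus" is just a list of peer caches that drop their stale lines on every write.

    class WriteThroughCache:
        def __init__(self, memory):
            self.memory = memory      # shared dict standing in for main memory
            self.peers = []           # other caches snooping the bus
            self.lines = {}

        def read(self, var):
            if var not in self.lines:            # miss: fetch from main memory
                self.lines[var] = self.memory[var]
            return self.lines[var]

        def write(self, var, v):
            self.memory[var] = v                 # write through to shared memory
            self.lines[var] = v
            for peer in self.peers:              # snooping invalidates stale copies
                peer.lines.pop(var, None)

    mem = {"x": 0}
    c1, c2 = WriteThroughCache(mem), WriteThroughCache(mem)
    c1.peers.append(c2); c2.peers.append(c1)
    c2.read("x")             # c2 caches x = 0
    c1.write("x", 7)         # updates memory, invalidates c2's line
    assert c2.read("x") == 7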

  13. Sequential Consistency • Lamport 1979 • A multiprocessor system is sequentially consistent if the result of any execution is the same as if the operations of all processors were executed in some sequential order and the operations of each individual processor appear in this sequence in the order specified by its program.

  14. Sequential Consistency: example • Writers: P1: x = 3; w = x and P2: x = 5; y = z; y = 4; z = 3, interleaved as ..., x = 5; x = 3; w = x; y = z; y = 4; z = 3, ... • Readers: a = x and b = x return 3 then 5 at one process, while c = x and d = x return 5 then 3 at another • No single interleaving can order the two writes to x so that both read sequences are legal, so the execution is not sequentially consistent
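
  Sequential consistency of a small history can be checked by brute force: try every interleaving that respects each process's program order and test whether any of them is legal. This sketch (exponential, illustration only; the encoding is mine) confirms that the read pattern above admits no legal interleaving.

    from itertools import permutations

    def is_legal(history):               # same legality check as sketched earlier
        mem = {}
        for op, var, val in history:
            if op == "w":
                mem[var] = val
            elif mem.get(var) != val:
                return False
        return True

    def sequentially_consistent(processes):
        # tag each op with (process, position) so equal ops stay distinguishable
        tagged = [(pi, i, op)
                  for pi, p in enumerate(processes) for i, op in enumerate(p)]
        for order in permutations(tagged):
            nxt = [0] * len(processes)   # next expected position per process
            ok = True
            for pi, i, _ in order:       # program order must be preserved
                if i != nxt[pi]:
                    ok = False
                    break
                nxt[pi] += 1
            if ok and is_legal([op for _, _, op in order]):
                return True
        return False

    # One process sees x as 3 then 5, another as 5 then 3: no interleaving works.
    p1, p2 = [("w", "x", 3)], [("w", "x", 5)]
    p3 = [("r", "x", 3), ("r", "x", 5)]
    p4 = [("r", "x", 5), ("r", "x", 3)]
    assert not sequentially_consistent([p1, p2, p3, p4])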

  15. Brown’s Algorithm • Each process i has a queue In_i of invalidation requests

  16. Brown’s algorithm • w(x)v: perform all invalidations in the In queue; update main memory; place an invalidation request in the In queue of every process • r(x): if x is in the cache, read x; otherwise, perform all invalidations in In_i, then read from main memory
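
  A hypothetical in-memory sketch of Brown's algorithm as described on this slide: a write drains the local In queue, updates main memory, and enqueues an invalidation at every process; a read miss drains the queue before going to main memory. Names and the queue representation are mine.

    class BrownDSM:
        def __init__(self, nprocs):
            self.main = {}                            # main memory
            self.cache = [{} for _ in range(nprocs)]  # per-process cache
            self.inq = [[] for _ in range(nprocs)]    # In_i invalidation queues

        def _drain(self, i):
            for var in self.inq[i]:      # perform all pending invalidations
                self.cache[i].pop(var, None)
            self.inq[i].clear()

        def write(self, i, var, v):
            self._drain(i)               # perform all invalidations in In_i
            self.main[var] = v           # update main memory
            for q in self.inq:           # enqueue an invalidation at every process
                q.append(var)

        def read(self, i, var):
            if var in self.cache[i]:     # cached: read locally
                return self.cache[i][var]
            self._drain(i)               # miss: invalidate, then go to main memory
            self.cache[i][var] = self.main[var]
            return self.cache[i][var]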

  17. Whenever main memory is accessed, all outstanding invalidations must be performed. • Sequential consistency is maintained.

  18. Distributed implementation • All processes maintain a local copy • Write w(x)v: send a message to all processors updating x to v • Read r(x): read the local copy
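
  A sketch of this fully replicated design, with a plain loop standing in for the broadcast (class and method names are invented):

    class ReplicatedDSM:
        def __init__(self, nprocs):
            self.copies = [{} for _ in range(nprocs)]    # one local copy each

        def write(self, proc, var, v):
            for copy in self.copies:     # "send message to all processors";
                copy[var] = v            # a loop stands in for the network

        def read(self, proc, var):
            return self.copies[proc].get(var)            # purely local read

  Note that if the update messages are not delivered in the same order everywhere, two concurrent writes can leave replicas disagreeing, which is why the preceding consistency models matter.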
