
Concurrency in Shared Memory Systems


Presentation Transcript


  1. Concurrency in Shared Memory Systems: Synchronization and Mutual Exclusion

  2. Processes, Threads, Concurrency • Traditional processes are sequential: one instruction at a time is executed. • Multithreaded processes may have several sequential threads that can execute concurrently. • Processes (threads) are concurrent if their executions overlap – start time of one occurs before finish time of another.

  3. Concurrent Execution • On a uniprocessor, concurrency occurs when the CPU is switched from one process to another, so the instructions of several threads are interleaved (alternated). • On a multiprocessor, execution of instructions in concurrent threads may be overlapped (occur at the same time) if the threads are running on separate processors.

  4. Concurrent Execution • An interrupt, followed by a context switch, can take place between any two instructions. • Hence the pattern of instruction overlapping and interleaving is unpredictable. • Processes and threads execute asynchronously – we cannot predict if event a in process i will occur before event b in process j.

  5. Sharing and Concurrency • System resources (files, devices, even memory) are shared by processes, threads, and the OS. Uncontrolled access to shared entities can cause data integrity problems. • Example: Suppose two threads (1 and 2) have access to a shared (global) variable “balance”, which represents a bank account. • Each thread has its own private (local) variable “withdrawal_i”, where i is the thread number.

  6. Example • Let balance = 100, withdrawal_1 = 50, and withdrawal_2 = 75. • Thread i will execute the following algorithm:
        if (balance >= withdrawal_i)
            balance = balance – withdrawal_i
        else
            // print “Can’t overdraw account!”
     • If thread 1 executes first, balance will be 50 and thread 2 can’t withdraw funds. • If thread 2 executes first, balance will be 25 and thread 1 can’t withdraw funds.

  7. But: what if the two threads execute concurrently instead of sequentially? • Break the code down into machine-level operations:
        if (balance >= withdrawal_i) balance = balance – withdrawal_i
     becomes:
        move balance to a register
        compare register to withdrawal_i
        branch if less-than
        register = register – withdrawal_i
        store register contents in balance

  8. Example – Multiprocessor (a possible instruction sequence showing interleaved execution)
        (1) Thread 2: move balance to register2 (register2 = 100)
        (2) Thread 1: move balance to register1 (register1 = 100)
        (3) Thread 2: compare register2 to withdraw2
        (4) Thread 1: compare register1 to withdraw1
        (5) Thread 1: register1 = register1 – withdraw1 (100 – 50)
        (6) Thread 2: register2 = register2 – withdraw2 (100 – 75)
        (7) Thread 1: store register1 in balance (balance = 50)
        (8) Thread 2: store register2 in balance (balance = 25)
     Both withdrawals appear to succeed, even though 50 + 75 > 100: thread 1’s update is overwritten and lost.

  9. Example – Uniprocessor (a possible instruction sequence showing interleaved execution)
        Thread 1: move balance to register (reg. = 100)
        T1’s time slice expires – its state is saved
        Thread 2: move balance to reg.
        Thread 2: balance >= withdraw2
        Thread 2: balance = balance – withdraw2 = (100 – 75)
        T1 is re-scheduled; its state is restored (reg. = 100)
        Thread 1: balance = balance – withdraw1 (100 – 50)
     Result: balance = 50, and thread 2’s withdrawal of 75 is lost.
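
  The lost update is easy to demonstrate in code. Below is a minimal C/pthreads sketch of the unsynchronized withdrawal (an illustration, not the slides’ code; the function and variable names are invented). Because the balance test and the store are separate operations, both threads can pass the test before either one writes its result back. Compile with -pthread and run repeatedly; occasionally both withdrawals of 50 and 75 “succeed” against a balance of 100.

        /* Deliberately racy: the read-modify-write on balance is unprotected. */
        #include <pthread.h>
        #include <stdio.h>

        int balance = 100;                    /* shared (global) account balance */

        static void *withdraw(void *arg) {
            int amount = *(int *)arg;
            if (balance >= amount) {          /* read balance ...                */
                /* a context switch here lets the other thread read the same
                   stale balance, exactly as in the interleavings above         */
                balance = balance - amount;   /* ... then write the result back  */
            }
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            int w1 = 50, w2 = 75;
            pthread_create(&t1, NULL, withdraw, &w1);
            pthread_create(&t2, NULL, withdraw, &w2);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("final balance = %d\n", balance);  /* 50, 25 ... or a lost update */
            return 0;
        }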

  10. Race Conditions • The previous examples illustrate a race condition (data race): an undesirable condition that exists when several processes access shared data, at least one access is a write, and the accesses are not mutually exclusive. • Race conditions can lead to inconsistent results.

  11. Mutual Exclusion • Mutual exclusion forces serial resource access as opposed to concurrent access. • When one thread locks a critical resource, no other thread can access it until the lock is released. • Critical section (CS): code that accesses shared resources. • Mutual exclusion guarantees that only one process/thread at a time can execute its critical section, with respect to a given resource.

  12. Mutual Exclusion Requirements • It must ensure that only one process/thread at a time can access a shared resource. • In addition, a good solution will ensure that: • if no thread is in the CS, a thread that wants to execute its CS must be allowed to do so • when 2 or more threads want to enter their CSs, the decision of which proceeds cannot be postponed indefinitely • every thread should have a chance to execute its critical section (no starvation)

  13. Solution Model • The basic pattern:
        Begin_mutual_exclusion    /* some mutex primitive */
        execute critical section
        End_mutual_exclusion      /* some mutex primitive */
     • The problem: how to implement the mutex primitives? • Busy-wait solutions (e.g., the test-set operation, spinlocks of various sorts, Peterson’s algorithm) – see the sketch below • Semaphores (usually an OS feature; blocks the waiting process) • Monitors (a language feature – e.g., Java)
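
  A minimal sketch of the first option: a busy-wait spinlock built on C11’s indivisible test-and-set (the function names are assumptions for this illustration, not the slides’ code).

        #include <stdatomic.h>

        atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear = lock is free */

        void begin_mutual_exclusion(void) {
            /* test-and-set is indivisible: it returns the old value and sets
               the flag in one atomic step; spin until we saw "clear"        */
            while (atomic_flag_test_and_set(&lock_flag))
                ;                                   /* busy wait             */
        }

        void end_mutual_exclusion(void) {
            atomic_flag_clear(&lock_flag);          /* release the lock      */
        }

  Busy waiting burns CPU cycles while spinning, which is one reason blocking mechanisms such as semaphores are usually preferred for longer critical sections.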

  14. Semaphores • Definition: an integer variable on which processes can perform two indivisible operations, P( ) and V( ), plus initialization. (P and V are sometimes called Wait and Signal.) • Each semaphore has a wait queue associated with it. • Semaphores are protected by the operating system.

  15. Semaphores • Binary semaphore: only values are 1 and 0 • Traditional semaphore: may be initialized to any non-negative value; can count down to zero. • Counting semaphores: P & V operations may reduce semaphore values below 0, in which case the negative value records the number of blocked processes. (See CS 490 textbook)

  16. Semaphores • Are used to synchronize and coordinate processes and/or threads • Calling the P (wait) operation may cause a process to block • Calling the V (signal) operation never causes a process to block, but may wake a process that has been blocked by a previous P operation.

  17. Traditional Semaphore
        P(S): if S >= 1 then S = S – 1
              else block the process on S queue
        V(S): if some processes are blocked on S queue then unblock a process
              else S = S + 1
      Counting Semaphore
        P(S): S = S – 1
              if (S < 0) then block the process on S queue
        V(S): S = S + 1
              if (S <= 0) then move a process from S queue to the Ready queue
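
  To make the pseudocode concrete, here is a sketch of the traditional (non-negative) semaphore built from a pthread mutex and condition variable; the csem_* names are invented for this illustration, and a real OS would implement P and V inside the kernel, as the later slides note.

        #include <pthread.h>

        typedef struct {
            int value;                /* semaphore count, always >= 0      */
            pthread_mutex_t lock;     /* protects value                    */
            pthread_cond_t nonzero;   /* signaled when value becomes > 0   */
        } csem_t;

        void csem_init(csem_t *s, int initial) {
            s->value = initial;
            pthread_mutex_init(&s->lock, NULL);
            pthread_cond_init(&s->nonzero, NULL);
        }

        void csem_P(csem_t *s) {      /* wait: block until value >= 1      */
            pthread_mutex_lock(&s->lock);
            while (s->value == 0)     /* re-check after every wakeup       */
                pthread_cond_wait(&s->nonzero, &s->lock);
            s->value--;
            pthread_mutex_unlock(&s->lock);
        }

        void csem_V(csem_t *s) {      /* signal: never blocks the caller   */
            pthread_mutex_lock(&s->lock);
            s->value++;
            pthread_cond_signal(&s->nonzero);  /* wake one blocked waiter  */
            pthread_mutex_unlock(&s->lock);
        }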

  18. Usage – Mutual Exclusion • Using a semaphore to enforce mutual exclusion:
        P(mutex)      // mutex initially = 1
        execute CS
        V(mutex)
     • Each process that uses a shared resource must first check (using P) that no other process is in the critical section, and then must use V to release the critical section.

  19. Bank Problem Revisited (semaphore S = 1)
        Thread 1:                              Thread 2:
        P(S)                                   P(S)
        move balance to register1              move balance to register2
        compare register1 to withdraw1         compare register2 to withdraw2
        register1 = register1 – withdraw1      register2 = register2 – withdraw2
        store register1 in balance             store register2 in balance
        V(S)                                   V(S)

  20. Example – Uniprocessor
        Thread 1: P(S) – S is decremented: S = 0; T1 continues to execute
        Thread 1: move balance to register (reg. = 100)
        T1’s time slice expires – its state is saved
        Thread 2: P(S) – since S = 0, T2 is blocked
        T1 is re-scheduled; its state is restored (reg. = 100)
        Thread 1: balance = balance – withdraw1 (100 – 50)
        Thread 1: V(S) – Thread 2 returns to the run state; S remains 0
        Thread 2: move balance to reg. (50)
        Thread 2: balance >= withdraw2? Since !(50 >= 75), T2 does not make the withdrawal
        Thread 2: V(S) – since no thread is waiting, S is set back to 1
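
  The same scenario runs correctly with POSIX semaphores, where sem_wait plays the role of P and sem_post the role of V. A sketch (error checking omitted for brevity):

        #include <pthread.h>
        #include <semaphore.h>
        #include <stdio.h>

        int balance = 100;
        sem_t S;                          /* binary semaphore guarding balance */

        static void *withdraw(void *arg) {
            int amount = *(int *)arg;
            sem_wait(&S);                 /* P(S): enter the critical section  */
            if (balance >= amount)
                balance = balance - amount;
            else
                printf("Can't overdraw account!\n");
            sem_post(&S);                 /* V(S): leave the critical section  */
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            int w1 = 50, w2 = 75;
            sem_init(&S, 0, 1);           /* S = 1: the resource starts free   */
            pthread_create(&t1, NULL, withdraw, &w1);
            pthread_create(&t2, NULL, withdraw, &w2);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("final balance = %d\n", balance);  /* always 50 or 25       */
            sem_destroy(&S);
            return 0;
        }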

  21. Critical Sections are Indivisible • The effect of mutual exclusion is to make a critical section appear to be “indivisible” – much like a hardware instruction. (Recall the atomic nature of a transaction.) • In the bank example, once T1 enters its critical section, no other thread is allowed to operate on balance until T1 signals it has left the CS. (This assumes that all users employ mutual exclusion.)

  22. Implementing Semaphores: P and V Must Be Indivisible • Semaphore operations themselves must be indivisible, or atomic; i.e., they execute under mutual exclusion. • Once the OS begins to execute a P or V operation, it cannot allow another P or V to begin on the same semaphore.

  23. P and V Must Be Indivisible • The P operation must be indivisible; otherwise there is no guarantee that two processes won’t test S at the “same” time and both find it equal to 1.
        P(S): if S >= 1 then S = S – 1
              else block the process on S queue
     • Two V operations executed at the same time could unblock two processes, leading to two processes in their critical sections concurrently.
        V(S): if some processes are blocked on the queue for S then unblock a process
              else S = S + 1

  24. Putting it together – P and V expanded around a critical section:
        if S >= 1 then S = S – 1 else block the process on S queue                          // P(S)
        execute critical section
        if processes are blocked on the queue for S then unblock a process else S = S + 1   // V(S)

  25. Semaphore Usage – Event Wait(synchronization that isn’t mutex) • Suppose a process P2 wants to wait on an event of some sort (call it A) which is to be executed by another process P1 • Initialize a shared semaphore to 0 • By executing a wait (P) on the semaphore, P2 will wait until P1 executes event A and signals, using the V operation.

  26. Event Wait – Example (semaphore signal = 0;)
        Process 1:            Process 2:
        ....                  ...
        execute event A       P(signal)
        V(signal)             ...
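
  A runnable version of the pattern, assuming POSIX semaphores and two threads standing in for the two processes (a sketch, not the slides’ code):

        #include <pthread.h>
        #include <semaphore.h>
        #include <stdio.h>

        sem_t signal_sem;                 /* the semaphore "signal", initially 0 */

        static void *p1(void *arg) {
            printf("executing event A\n");
            sem_post(&signal_sem);        /* V(signal): announce that A is done  */
            return NULL;
        }

        static void *p2(void *arg) {
            sem_wait(&signal_sem);        /* P(signal): block until A happens    */
            printf("event A has occurred\n");
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            sem_init(&signal_sem, 0, 0);  /* 0 = the event has not yet occurred  */
            pthread_create(&t2, NULL, p2, NULL);
            pthread_create(&t1, NULL, p1, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }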

  27. Semaphores Are Not Perfect • Programmer must know something about other processes using the semaphore • Must use semaphores carefully (be sure to use them when needed; don’t leave out a V(), etc.) • Hard to prove program correctness when using semaphores.

  28. Other Synchronization Problems (in addition to simple mutual exclusion) • Dining Philosophers: resource deadlock • Producer-consumer: buffering (as of messages, input data, etc.) • Readers-writers: database or file sharing • readers’ priority • writers’ priority

  29. Producer-Consumer • Producer processes and consumer processes share a (usually finite) pool of buffers. • Producers add data to pool • Consumers remove data, in FIFO order

  30. Producer-Consumer Requirements • The processes are asynchronous. A solution must ensure producers don’t deposit data if pool is full and consumers don’t take data if pool is empty. • Access to buffer pool must be mutually exclusive since multiple consumers (or producers) may try to access the pool simultaneously.

  31. Bounded Buffer P/C Algorithm
        Initialization: s = 1; n = 0; e = sizeofbuffer;
        Producer:
          while (true)
            produce v;
            P(e);        // wait for an empty buffer slot
            P(s);        // wait for buffer pool access
            append(v);
            V(s);        // release buffer pool
            V(n);        // signal a full buffer
        Consumer:
          while (true)
            P(n);        // wait for a full buffer
            P(s);        // wait for buffer pool access
            w := take();
            V(s);        // release buffer pool
            V(e);        // signal an empty buffer
            consume(w);
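
  The algorithm maps directly onto POSIX semaphores. The sketch below assumes a circular buffer of ints; the buffer size and the names producer_deposit/consumer_take are invented for the illustration, while s, n, and e keep the slide’s roles.

        #include <semaphore.h>

        #define BUFSIZE 8

        int buffer[BUFSIZE];
        int in = 0, out = 0;          /* FIFO insert and remove positions    */
        sem_t s;                      /* buffer pool mutex, initialized to 1 */
        sem_t n;                      /* full slots, initialized to 0        */
        sem_t e;                      /* empty slots, initialized to BUFSIZE */

        void init(void) {
            sem_init(&s, 0, 1);
            sem_init(&n, 0, 0);
            sem_init(&e, 0, BUFSIZE);
        }

        void producer_deposit(int v) {
            sem_wait(&e);             /* P(e): wait for an empty slot  */
            sem_wait(&s);             /* P(s): wait for buffer access  */
            buffer[in] = v;           /* append(v)                     */
            in = (in + 1) % BUFSIZE;
            sem_post(&s);             /* V(s): release the buffer pool */
            sem_post(&n);             /* V(n): signal a full slot      */
        }

        int consumer_take(void) {
            sem_wait(&n);             /* P(n): wait for a full slot    */
            sem_wait(&s);             /* P(s): wait for buffer access  */
            int w = buffer[out];      /* w := take()                   */
            out = (out + 1) % BUFSIZE;
            sem_post(&s);             /* V(s): release the buffer pool */
            sem_post(&e);             /* V(e): signal an empty slot    */
            return w;
        }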

  32. Readers and Writers Problem • Characteristics: • concurrent processes access shared data area (files, block of memory, set of registers) • some processes only read information, others write (modify and add) information • Restrictions: • Multiple readers may read concurrently, but when a writer is writing, there should be no other writers or readers.

  33. Compare to Prod/Cons • Differences between Readers/Writers (R/W) and Producer/Consumer (P/C): • Data in P/C is ordered - placed into buffer and retrieved according to FIFO discipline. All data is read exactly once. • In R/W, same data may be read many times by many readers, or data may be written by writer and changed before any reader reads. No order enforced on reads.

  34. // Initialization code (done only once)
        integer readcount = 0;
        semaphore x = 1, wsem = 1;

        procedure writer;
        begin
          repeat
            P(wsem);
            write data;
            V(wsem);
          forever
        end;

        procedure reader;
        begin
          repeat
            P(x);
            readcount = readcount + 1;
            if readcount == 1 then P(wsem);
            V(x);
            read data;
            P(x);
            readcount = readcount - 1;
            if readcount == 0 then V(wsem);
            V(x);
          forever
        end;
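
  The same readers’-priority solution translated to C with POSIX semaphores (a sketch; read_data and write_data are placeholders standing in for the actual data access):

        #include <semaphore.h>

        int readcount = 0;    /* number of readers currently reading      */
        sem_t x;              /* protects readcount; initialized to 1     */
        sem_t wsem;           /* held during writes, and by the group of
                                 active readers; initialized to 1         */

        void reader(void) {
            sem_wait(&x);
            readcount++;
            if (readcount == 1)       /* first reader locks out writers   */
                sem_wait(&wsem);
            sem_post(&x);

            /* read_data(); many readers may be here concurrently */

            sem_wait(&x);
            readcount--;
            if (readcount == 0)       /* last reader readmits writers     */
                sem_post(&wsem);
            sem_post(&x);
        }

        void writer(void) {
            sem_wait(&wsem);          /* exclusive access                 */
            /* write_data(); */
            sem_post(&wsem);
        }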

  35. Any Questions? Can you think of any real examples of producer-consumer or reader-writer situations?

  36. Semaphores and User Thread Library • Thread libraries can simulate real semaphores. • In a multi-(user-level)-threaded process, the OS sees only a single thread of execution; e.g., T1, T1, T1, L, L, T2, T2, L, L, T1, T1, … (L = library code). • Library functions execute when a u-thread voluntarily yields control. • Use a variable as a semaphore, accessed via P & V functions: a thread executes P(S), finds S = 0, and then yields control (see the sketch below).
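
  A sketch of what such library-level P and V might look like, assuming a purely cooperative (non-preemptive) user-thread package with a hypothetical thread_yield() call:

        extern void thread_yield(void);  /* hypothetical library call: hand the
                                            CPU to another ready user thread   */

        int S = 1;                       /* plain variable used as a semaphore */

        void P(void) {
            while (S == 0)
                thread_yield();          /* let some other thread run V()      */
            S = S - 1;                   /* safe only because no preemption can
                                            occur between the test and the
                                            decrement                          */
        }

        void V(void) {
            S = S + 1;                   /* a yielded waiter will see S > 0    */
        }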

  37. Semaphores and User Thread Library • Why is this safe? Because there is really never more than one thread of control – violations of mutual exclusion happen when separate threads are scheduled concurrently. • A user-level thread decides when to yield control; kernel-level threads don’t. • If the library is asked to execute P(S) or V(S) it will not be interrupted by another thread in the same process, so there is no danger.
